
Automatic evaluation technique for certain types of open questions in semantic learning systems

Abstract

In this paper, we propose a methodology to enhance the evaluation tools in semantic learning systems. Our aim is to evaluate two types of open questions in hybrid exams. For the first type, the MOQ (Multi Operations Question), the proposed technique uses the matrix concept to compute a fuzzy score. The POQ (Proof Open Question) is more complicated, so we connect directly to the learning objects, which are stored as an ontology. We also take into consideration the dependence among learning objects by merging the universal ontology with a weight matrix.

The proposed methodology has been applied to a case study: a mathematical multi operations question and a proof question from a logic course in a hybrid exam.

Introduction

Recently, computers and information technology have been making a revolution in education systems. They have many advantages, such as low cost and internationality. The new generation of e-learning systems is applicable in many fields, including hybrid fields as in reference [1]. Consequently, online exams are widely used. Online exams are more convenient and flexible than traditional exams. They also reduce the overall expense of processing exams, especially the costs of paper, storage, and materials. The easiest type of question is the closed question, such as the multiple-choice question. It is straightforward and does not require any text mining, Artificial Intelligence (AI), or Natural Language Processing (NLP) techniques or algorithms [2].

However, multiple-choice questions cannot assess students' skills in writing and expression. In some fields, educators prefer essay questions in order to grade students' skills more realistically.

Open questions are considered the most appropriate, because they are the most natural and they elicit a deeper degree of thought. They help to evaluate the understanding of ideas, the students' ability to organize material and develop reasoning, and the originality of their thoughts. We can say that using open-question evaluation tools is good for understanding different human skills; that is a human-centered computing (HCC) feature. This work can be classified as a preprocessing step in the analysis of human skills, one of the three large areas of HCC activity (production, analysis, and interaction); we worked deeply in this area in reference [3]. We also use a weight matrix together with the ontology to improve the evaluation of proof questions: it lets us evaluate the student's ability to relate pieces of knowledge to each other. In this work, we generate a new scoring function that uses the weight matrix to measure the student's ability to connect knowledge in her/his mind.

However, open questions are much more difficult to evaluate than more restricted tests such as multiple-choice tests. When a student calculates some mathematical formula, not only the final result interests us; the deduction process is important too. We can evaluate the deduction path and check whether the student understands the problem by evaluating the steps.

The features of open questions in reference [4] are: no fixed method; no fixed answer, or many possible answers; solvable in different ways and at different levels (accessible to mixed abilities); permitting a natural mathematical way of thinking; developing reasoning and communication skills; and opening creativity and imagination when related to a real-life context.

There are many types of open questions: problems with missing data or hidden assumptions, proof questions, multi-step problems, problems asking to explain a concept/procedure/error, problem posing, real-life/practical problems, oral questions, investigative problems (compare, contrast, classify, test hypotheses, and generalize), and so on.

In this paper, we focus on the POQ and MOQ.

There are different kinds of communication infrastructure between e-learning content objects and e-learning platforms. For the successful application of any IES (Intelligent Educational System), it is necessary to get information about a learner's knowledge [5, 6]. So we propose a direct connection between the learning materials and the evaluation tool.

Knowledge is represented through an ontology, an artificial-intelligence knowledge representation method [7]. This technology is currently used for representing human knowledge and as a critical component in knowledge management, the semantic web, business-to-business applications, bioinformatics, e-learning, etc. In particular, using ontologies in e-learning for different purposes is commonly accepted in the community [6]. Ontologies allow representing, in a shareable and reusable manner, the knowledge involved in the evaluation processes. Learning objects at the same level of the ontology are related if they are in the same ontology class. So, for proof questions, we can generate a weight matrix as in FCM [8]. In [8], a weight function that calculates a weight for each rule was used; we instead use only 1 if there is a relation (dependent) and 0 if there is no relation (independent). Then we can use the sum of this matrix as another variable with which to evaluate the student.
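As a minimal illustration (our own toy example, not taken from [8]): suppose a student's proof uses three rules $r_1$, $r_2$, $r_3$, of which only $r_1$ and $r_2$ share an ontology class, and suppose self-relations count as zero. Then

$$W = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad \sum_{i,j} W_{ij} = 2,$$

and the sum 2 enters the evaluation alongside the number of rules used.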

The paper is organized as follows: section two presents the literature review, section three the proposed methodology, section four implements the proposed method in the case study, and finally we present the conclusion and further work.

Literature review

Several methodologies have been proposed to solve the problems in automatic evaluation of open questions. Some of them are summarized below:

Chang et al. [9] made a comparative study between the different scoring methods. They also studied the different types of exams and their effect on reducing the possibility of guessing in multiple choice questions.

Rein [10] proposed an intelligent system to help with mathematical problems. The system is similar in concept to the programming-language technology in which software developers are assisted while programming by being shown possible actions and mistakes as they type.

Mu et al. [11] discussed an approach for the automatic grading of code assignments. The developed tool assesses issues with the code, such as performance and logical errors.

An SMT (Satisfiability Modulo Theories) solver is one method for a certain type of proof. It has not been applied to exams until now; it is an example of a formal method. There are many SMT solvers, as in references [12–14]. An SMT solver performs automatic theorem proving: its job is the decision problem of determining whether a given logic formula is satisfiable with respect to a combination of theories expressed in first-order logic. There is a yearly competition, SMT-COMP [15].

For the MOQ type, evaluation using the set of correct answers is the traditional method, but this method does not respect the order of the answer's steps [16]. There is also evaluation using the vector concept. It is more complicated but respects the order of the answer; however, the solution must then exactly match the template of the model answer.

Reference [8] proposed an FCM to determine the dependences between concepts, using a network graph representation. Fuzzy concepts are used to represent the knowledge dependencies among a learning material's domain concepts in adaptive learning system knowledge representation [17]. The FCM also represents each concept's strength of impact on the other related concepts.

Proposed method

In this section, we present our proposal for an evaluation tool for hybrid exams. Specifically, the exam has three types of questions, presented in three subsections: the MOQ, which has a variable number of operations or steps; the POQ; and the old type, the closed question.

Proof open questions (POQ)

Proof questions are a mathematical type of open question. This type may be based on inference and reasoning through the constructed solutions. It also exhibits dependence among concepts: when we solve problems, some rules may or may not depend on each other, e.g., calculating an average depends on the summation and on the division over the summation.

In this part, we propose to connect the answers directly with the learning content's objects rather than with a model answer. So this part of the proposed tool is built on a semantic knowledge representation, the "universal ontology model", where the extended domain ontology may require updates to solve domain problems. This helps to prevent restriction to specific, predetermined answers. Instead, we gain the variety and flexibility to handle different meanings and to process over any constructed model.

The connection process in each step of the evaluation tool depends on the keys from the solution. The keys in the POQ are the rules and theorems that are used. The student applies rules in such a way as to solve the problem; this satisfies the aim of the course.

The semantic e-learning system contains an ontology model designed for further use in a Content Management System (CMS).

We use a CMS to implement our technique: the admin interacts with the knowledge represented by the ontology in an appropriate way, supported by the e-learning web application system. We build a bridge between a semantic system, built using a semantic web language, and the online e-learning web application; using our system in e-learning tests enables users to interact with and access the open questions included in online exams.

We can load the developed course content model into the CMS whenever a change occurs in a syllabus' learning contents (e.g., updating or modifying), i.e., inserting new learning objects, introducing new items, or even deleting from the syllabus contents. We associate the required model answers in the generated model with their corresponding constructed questions. The developed system considers only the required concepts and answers that belong to their questions. The constructed answers are checked against the generated model.

Each correct equation gains its percentage score value, which is stored in the student database. After these values are submitted, the teacher may preview the students' answers and their score evaluations; hence scoring and feedback on each student's performance, as well as measurements of cognitive abilities, can be obtained.

Multi operations question (MOQ)

The second question type is the open question with a variable number of steps, where the solution steps are not unique. For this type of question we use the matrix concept to merge the vector evaluation technique and the set evaluation technique, i.e.

Set evaluation technique $+_h$ Vector evaluation technique = Matrix evaluation technique

where $+_h$ denotes the hybrid combination. This combination is the main idea of the evaluation tool's algorithm, which can automate the human rater's scoring. Using the matrix concept also makes the proposed method adequate for any answer that has ordered steps.

In general, some values in the final solution depend on previously computed values: we cannot reach the final correct value if a previous value it depends on is wrong. So, when writing the solution values, the steps should be set in a correct order with correct values; in most cases the order of each solution matters. Vector evaluation restricts the position of each value in the answer template, while set evaluation concerns itself with the number of all possible values without restricting their positions in the solution. So we propose the matrix evaluation technique, which holds a set of array values, generates the set of all possible correct values that could appear in the solution, accounts for the positional importance of each item in the constructed solution, and evaluates answers both relatively and absolutely. We can thus measure the similarity between a student's answers and a certain row in the generated solution matrix.

To prevent plagiarism, teachers sometimes require descriptive details for each question's answer; reducing the number of steps then reduces the score as well. Each item in the evaluation matrix is assigned a specific score. If a correspondence exists between the student's item and the teacher's item, the score is gained; otherwise it is not. The sum of all these scores is the student's total score. The general form of the evaluation matrix, containing all possible solution sets, is:

$$I_T = \begin{pmatrix}
A & B & C & D & E \\
\{A,B\} & C & D & E & O \\
\{A,B,C\} & D & E & O & O \\
\{A,B,C,D\} & E & O & O & O \\
E & O & O & O & O
\end{pmatrix}
\begin{matrix} 100\% \\ 80\% \\ 60\% \\ 40\% \\ 20\% \end{matrix}$$

where $O$ marks an unused position and the right-hand column is the percentage earned when the student's answer matches that row.

The general form for student solution that has the set of all possibly created values is:

$$I_{ST} = \left( x_{z1}, x_{z2}, \ldots, x_{zm} \right)$$

where z is the row number in the evaluation matrix and m is the number of values in the student's solution.

Algorithm steps

I - Input: the teacher's question, the teacher's model answers, and the student's solutions.

II - The processes:

1. Count the mathematical operations (n elements) in the teacher's question.

2. Assign each item in the teacher's answer row a specific score.

3. Create a matrix.

4. Insert the teacher's answer items into this matrix as the first row.

5. Generate the next row by merging the leading items of the previous row into one set of accepted values, so that each row holds one fewer position than the row above; proceed this way until the last row holds only the final item.

6. Put the student's solution values into an array.

7. Compare the student's solution array with each row of the created matrix.

8. If there is a similarity between the array and a certain row, calculate the percentage, which depends on several variables such as the row number and the teacher's requirements; otherwise the score is zero.

9. Total score = question's mark × percentage.

III - The output is the total score.

Through this algorithm we can generate the set of all possible values within the matrix, making the system more accurate and supportive of descriptive details. A question is evaluated absolutely, relatively, or both, according to whether a correspondence exists item-to-item or item-to-set-of-items. The characteristics of vectors and sets are combined to form the matrix structure model.
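The following is a minimal PHP sketch of this algorithm, our own reconstruction rather than the authors' production code; the percentage scheme (n−z)/n is an assumption chosen to reproduce the 100%, 80%, 60%, … pattern of the general form above.

```php
<?php
// Build the evaluation matrix: row 0 is the full detailed answer; each
// following row merges one more leading step into a set of accepted
// values; the last row holds the final answer alone (cf. the "E" row).
function buildEvaluationMatrix(array $teacher): array
{
    $n = count($teacher);
    $matrix = [];
    for ($z = 0; $z < $n - 1; $z++) {
        $row = [array_slice($teacher, 0, $z + 1)]; // merged leading set
        foreach (array_slice($teacher, $z + 1) as $item) {
            $row[] = [$item];                      // fixed-position items
        }
        $matrix[] = $row;
    }
    $matrix[] = [[$teacher[$n - 1]]];              // final answer only
    return $matrix;
}

// Match the student's ordered values against each row; on a match, return
// questionMark * percentage, where the percentage shrinks as the matched
// row becomes more summarized (a hypothetical reading of "depends on the
// row number and the teacher request").
function scoreMoq(array $student, array $teacher, float $questionMark): float
{
    $n = count($teacher);
    foreach (buildEvaluationMatrix($teacher) as $z => $row) {
        if (count($student) !== count($row)) {
            continue;                              // lengths must agree
        }
        foreach ($row as $i => $accepted) {
            if (!in_array($student[$i], $accepted)) {
                continue 2;                        // mismatch: try next row
            }
        }
        return $questionMark * ($n - $z) / $n;     // row z matched
    }
    return 0.0;                                    // no row matched: zero
}
```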

Now we explain the main relations between the proposed evaluation tools and the e-learning system. We must discuss these relations because the proposed evaluation tools, especially the POQ tool, cannot work independently: the POQ tool uses learning objects from the e-learning system. Our proposal depends on the new generation of e-learning, which builds learning objects using an ontology; in this case the e-learning system is called semantic e-learning.

Figure 1 shows the infrastructure of the evaluation part of the proposed semantic e-learning. The CMS is the content management system, or control panel, of the e-learning system. The teacher controls the icons from this side. If he/she has experience in using ontologies, he/she can change or insert learning objects through the "update learning contents" icon, which is a gate to the universal ontology of many related courses in a certain field; this ontology replaces a learning objects database. The other icons on the CMS side are gates to send the hybrid exam to the students or to send the model answers (final solution templates) to the MOQ tool. The student receives the exam and can send the answers from the student GUI. The student's answers are classified, in this first version, into three parts. The first part is the traditional closed questions, which use the original tools of the e-learning system. The second type of question is sent to the MOQ tool, explained in more detail in section Multi operations question (MOQ) and applied in section Handle MOQ. The third part is the POQ, which is handled by the POQ tool. The POQ tool needs certain information from the learning objects ontology; the connection between the ontology and the POQ tool is made through the XML-to-OWL converter in the Protégé platform, or through the Semantic Web Rule Language (SWRL) tool, also from the Protégé platform. Sections Proof open questions (POQ) and Handle POQ give more details about the POQ question type and its application in a certain example. The proposed POQ and MOQ tools then calculate the score and save the result in the student's database.

Figure 1. The infrastructure of the semantic e-learning evaluation part.

Implementation

In this section, we apply the proposal to a certain course using the Protégé platform and its plug-in library (subsection Create learning object materials ontology) to handle the OWL ontology. We also use SCORM Cloud for Drupal, which does not require a separate LMS. The algorithms presented in subsections Handle POQ and Handle MOQ were uploaded as icons serving as evaluation tools. The proposed algorithms were written in the PHP web scripting language with WampServer connecting script code, and we use XML to connect OWL to PHP.

To present our proposal, we needed a real example, so we created a simple partial ontology for a Boolean algebra course on the Protégé platform, displayed in subsection Create learning object materials ontology. In subsection Handle POQ we present different solutions to an example proof question, display part of the PHP code that handles the scoring function and its connection to the ontology, and present the weight function, which matches the solutions against the rules. Finally, in subsection Handle MOQ we present an example of a multi operations question, show how the matrix is generated, and give part of the score function for this type.

Create learning object materials ontology

The first step is building the ontology of the course domain. This is done by defining ontology classes and arranging them in a hierarchy of superclasses and subclasses using the top-down approach [18, 19]. In this approach, we define the concepts and the rules: common concepts are followed by more specific ones, with their properties in the slots; then we fill each instance's slot values. We construct the high levels of the class hierarchy and their subclasses, relate each main class to its subclasses, and identify the main and sub-concepts, relations, slot values, and instances. One of the fundamental uses of the Boolean operations "OR", "AND", and "NOT" is in axiomatic and algebraic proofs: a Boolean algebraic function in algebraic form can be simplified using Boolean algebra axioms, laws, and theorems. We simply insert the following part of Boolean algebra into the ontology. Table 1 summarizes some Boolean algebra functions from [20].

Table 1 Part of Boolean algebra functions

According to our implemented case, the developed POQ evaluation tool focuses on the ontology modeling of Boolean algebra axioms, laws, and functions. The ontology model represents Boolean algebra logic concepts, information, and learning objects, which satisfy our system's educational knowledge needs. It arranges the main concepts (axioms, laws, theorems) and their hierarchical concepts into classes and subclasses respectively, i.e., it starts by defining the classes of Boolean algebra axioms, laws, and theorems. Each of these classes has a number of related classes represented as subclasses (e.g., axiom 1, axiom 2, …, theorem 1, theorem 2, …). A relation is defined between the main class and its subclasses. Each axiom, law, and theorem contains two operations, one in OR form and the other in AND form. We make the values of these operations instances of the class they relate to, specifying the domain and the cardinality of each slot. We define the slots/properties and fill the instances' slot values for those classes. The Protégé platform was used to build our ontology model, tools, and semantic data, because Protégé's language is OWL (the Web Ontology Language).

OWL is suitable for web applications. We used Protégé and its XML plug-in to save the axiom rules model in XML file format, for importing and exporting XML files to the PHP code. The connection between the ontology and the POQ tool is made by the "simplexml_load_file" PHP function.

A visualization plug-in tool was used to visualize our ontology model, e.g., the Jambalaya plug-in. Figure 2 shows the Boolean algebra rules' ontology visualized with the Jambalaya tool.

Figure 2. Boolean algebra rules' ontology visualization model.

Handle POQ

The main contribution of the mathematical Proof Open Question (POQ) evaluation part is to develop an intelligent semantic method to automate POQ evaluation, and to design a technique built on ontology modeling in which users can interact with the knowledge represented by the ontology in an appropriate way, supported by their Drupal e-learning system; Drupal is a CMS freely available under the GPL [21].

In this research, the POQ evaluation tool has a consistent universal mathematical syllabus ontology model covering a number of mathematical courses. Each course has its own independent ontology model (e.g., abstract algebra, geometric algebra, Boolean algebra, …); these different ontologies hypothetically represent independent learning courses' contents and are aligned and merged together where concepts match. Each course contains its chapters, and each chapter contains a number of concepts available to different lessons; merging those ontologies together yields a unified universal ontology model.

To show how the system evaluates a POQ, consider the following scenario:

The teacher sets the POQ as X+X'+Y=X+Y in the input box, to be sent to the students. The tool then connects to the Boolean algebra ontology base. The proposed "simplexml_load_file" PHP code converts the XML ontology into PHP variables, as in Figure 3.

Figure 3. A part of the XML-to-PHP converter code.
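Since Figure 3 is an image, here is a minimal sketch of what such a converter might look like; the file name boolean_algebra_axioms.xml and the <rule>, <lhs>, <rhs> tag names are hypothetical stand-ins for the actual Protégé XML export:

```php
<?php
// Load the exported ontology file into a SimpleXMLElement tree.
$ontology = simplexml_load_file('boolean_algebra_axioms.xml');
if ($ontology === false) {
    die('Could not load the exported ontology file.');
}

// Walk the (hypothetical) <rule> elements and collect each rule's
// left- and right-hand sides into ordinary PHP arrays.
$rules = [];
foreach ($ontology->rule as $rule) {
    $rules[(string) $rule['name']] = [
        'lhs' => (string) $rule->lhs,   // e.g. "X+X'"
        'rhs' => (string) $rule->rhs,   // e.g. "1"
    ];
}
```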

Once the tutor sends the question, the student receives it, solves it, and puts the constructed answers in input boxes as in Figure 4.

Figure 4. Two students' answers to the same question.

When the student submits these values into the system, the POQ answers are connected directly to the POQ evaluation tool. The tool converts the classes or subclasses of the axioms or theorems named in the key column to an XML file to connect with the PHP tool code; if the student does not insert any axioms in the key column, the whole Boolean algebra ontology is converted.

As mentioned earlier, our system contains the Axiom1 rule's OWL/RDF ontology model, which is further converted into the corresponding XML file.

When the student submits a constructed solution item, the system checks both the rule and the values on its right- and left-hand sides against the scripting file's hierarchical tags. A score is gained for each correct solution item array; otherwise no score is given. The percentage score values are then calculated and stored in the student's DB.

These solution items and scores can be previewed, so the teacher receives the student's answers and the evaluations of each correct solution.

The score function in our proposed tool sets a value for each correct solution step; the student gains a specific score value for each correct solution part. That value increases as the student proceeds through correct steps until reaching the final correct result, even if he/she does not write out all the steps in detail: if a student reaches the final solution after any number of steps, he/she gains the score set for the problem. Figure 5 shows part of how the score code checks the solution arrays against the rule arrays in each step. Although students can submit different solutions, as in Figure 4, the POQ tool can evaluate them. For correct answers, the POQ tool moves to another scoring level: the weight function shown in Figure 6. With the weight function we can measure the student's ability to connect knowledge in order to arrive at the correct answer. The function steps are as follows (a code sketch follows the steps below):

Figure 5. The student's score code.
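Figure 5 is likewise an image; the sketch below shows one plausible shape for the per-step check it describes, under the assumption that each submitted step names a rule together with the two sides the student wrote, and that $rules is the array produced by the converter sketch above:

```php
<?php
// Accept a step only if the cited rule exists in the ontology-derived
// rule table and the values the student wrote on each side match the
// rule's stored left- and right-hand sides.
function checkStep(array $step, array $rules): bool
{
    $name = $step['rule'];
    return isset($rules[$name])
        && $rules[$name]['lhs'] === $step['lhs']
        && $rules[$name]['rhs'] === $step['rhs'];
}

// Accumulate a fixed score value per correct step.
function scorePoq(array $steps, array $rules, float $stepScore): float
{
    $score = 0.0;
    foreach ($steps as $step) {
        if (checkStep($step, $rules)) {
            $score += $stepScore;   // each correct step gains its value
        }
    }
    return $score;
}
```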

Figure 6. Part of the weight function.

The first step is generating the square matrix for the rules used, where a matrix value is 0 if the two rules are independent and 1 if they are dependent, i.e., according to whether the two rules are in the same ontology class or not.

The second step is summing the weight matrix values.

The third step is comparing the number of rules used (n, the matrix dimension) with the sum of the weight matrix values.
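A minimal PHP sketch of these three steps follows; the per-rule 'class' field and the direction of the final comparison (a sum of at least n meaning dependent rules) are our assumptions:

```php
<?php
// Two rules are dependent when they sit in the same ontology class.
function sameClass(array $a, array $b): bool
{
    return $a['class'] === $b['class'];
}

// Steps 1 and 2: build the 0/1 weight matrix over the rules actually
// used and sum its entries (self-relations excluded).
function weightSum(array $usedRules): int
{
    $n = count($usedRules);
    $sum = 0;
    for ($i = 0; $i < $n; $i++) {
        for ($j = 0; $j < $n; $j++) {
            if ($i !== $j && sameClass($usedRules[$i], $usedRules[$j])) {
                $sum += 1;   // w[i][j] = 1: dependent pair
            }
        }
    }
    return $sum;
}

// Step 3: compare n (the matrix dimension) with the summed weights.
$used = [
    ['name' => 'Axiom1',   'class' => 'Axioms'],
    ['name' => 'Theorem2', 'class' => 'Theorems'],
    ['name' => 'Axiom3',   'class' => 'Axioms'],
];
$n   = count($used);
$sum = weightSum($used);   // here: 2 (Axiom1 and Axiom3 are related)
echo $sum >= $n ? 'dependent rules used' : 'independent rules used';
```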

The output for the solutions in Figure 4 is shown in Figure 7. The score for each student is 100%, but the first student used independent rules (theorems, axioms, or lemmas) while the second student used dependent rules.

Figure 7. The scores for the solutions in Figure 4.

If the student's answer is incomplete, the score is as shown in Figure 8. The developed system also helps the admin to easily delete, modify, or update the courses' contents by modifying/deleting learning objects or even introducing new concepts, and to access and preview the learning objects.

Figure 8. A student's incomplete answer (part a) and the student's score (part b).

Handle MOQ

We implemented the algorithm proposed in section Multi operations question (MOQ) in the PHP web scripting language with WampServer connecting script code. PHP lets us build the evaluation matrix with multidimensional arrays that contain vectors and sets. Student solution vectors vary in length, so PHP is convenient: we are not restricted to building an n × n matrix with n² elements, and each created vector contains its own set. Once we have a mathematical question and its corresponding solution values, the matrix rows can be generated. The following example shows the implementation of the algorithm. The system's student interface for the online MOQ type is shown in Figure 9.

Figure 9. An example of MOQ.

Suppose that the open mathematical question is I = 2/2*4+1-2, whose final solution is I = 3. The steps of the model answer are

i) 2/2 = 1,  ii) 1*4 = 4,  iii) 4+1 = 5,  iv) 5-2 = 3.

We know that some arithmetic operations have higher precedence than others (* and / are executed before + and -). We have 4 operations/steps for this 4-equation question, all of which should be solved in the correct order. The student answer (1,4,5,3) is 100% correct. To be more specific, some students reach the final correct solution by summarizing, reducing the number of steps/operations required. We take this into consideration: the first row of the evaluation matrix contains the detailed (standard) correct values (1,4,5,3) as a vector of array values. (In other problems we may have two or more operations/steps of the same precedence, so we represent all the possible solution sets for each step in its corresponding position.) The 2nd row contains fewer correct elements than the 1st, i.e., student solution values of (4,5,3) or (1,5,3), each a vector of array values; we represent these possibilities in the evaluation matrix as ({1,4},5,3), combining both arrays. As we move down the evaluation matrix, the number of generated correct values expected in each row is smaller than in the previous one. The last row holds only the final correct solution.

To compare the solution matrix with the student's solution: if z = 3, i.e., the student array is like the 3rd row of the evaluation matrix, we give a score for each similar value as a correct answer; otherwise the score is zero. The total score is the sum of all scores.

Note that each item/element in the teacher template (model answer) has one value or a set of values. The MOQ tool generates the matrix of all correct answers:

$$\begin{pmatrix}
1 & 4 & 5 & 3 \\
\{1,4\} & 5 & 3 & 0 \\
\{1,4,5\} & 3 & 0 & 0 \\
3 & 0 & 0 & 0
\end{pmatrix}$$

If the student's answer array matches a row of the generated matrix, the following score function calculates the percentage; otherwise the student's answer is wrong and the percentage is zero (Figure 10). The score code for this type of question gives the same percentage to each step, and the percentage depends on the number of the completed answer's steps; it is easy, however, to let the teacher set the percentages instead.

Figure 10. MOQ score function code.
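As a hypothetical usage of the scoreMoq sketch given after the algorithm steps, applied to this worked example with a question mark of 10:

```php
<?php
$teacher = [1, 4, 5, 3];
echo scoreMoq([1, 4, 5, 3], $teacher, 10.0); // 10.0 (row 0: 100%)
echo scoreMoq([4, 5, 3],    $teacher, 10.0); // 7.5  (row 1: 75%)
echo scoreMoq([3],          $teacher, 10.0); // 2.5  (final answer only)
echo scoreMoq([1, 4, 9, 3], $teacher, 10.0); // 0.0  (wrong third step)
```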

The score result is as in Figure 11 if the student wrote only the final correct answer directly: the student misses the scores for the detailed steps.

Figure 11. The student's score.

Conclusion and further works

This paper proposed an evaluation tool for hybrid exams that contain the POQ and MOQ types of open questions. The methodology is based on semantic e-learning. A direct connection between the universal ontology of learning objects and the evaluation tool has many advantages: we do not need model answers for the POQ type, and the student can use any correct theorem in her/his answer. Using the weight matrix also addresses the problem of dependence among learning concepts. Finally, the fuzzy score matrix is beneficial for MOQs, which have ordered steps and can be answered in different ways.

In the future, for the POQ type, completing the fuzzy weight matrix will need a group of experts in the given field (e.g., mathematics); we therefore recommend that educational accreditation organizations support a project to determine the weights of dependence among material concepts. Then, to create any open-question exam of the proof type, we would connect directly to the universal ontology of learning materials and its fuzzy ontology, which would generate the fuzzy weight matrix automatically. We also recommend improving the MOQ tool by creating an AI question bank; it would be ontology-based and connected to the universal ontology of learning materials.

Authors’ information

Eman Elsayed was born in Athens, Greece. She received a B.Sc. degree in Mathematics and Computer Science from Cairo University, Egypt in 1994, an M.Sc. degree in Computer Science from Cairo University, Egypt in 1999, and a Ph.D. degree in Computer Science from Al-Azhar University, Cairo, Egypt in 2005. She works as a Lecturer in Computer Science in the Mathematics and Computer Science Department, Faculty of Science, Al-Azhar University. She is also a member of the Egyptian Mathematical Society (EMS) and the Intelligent Computer and Information Systems Society (ICIS). She has graduated nine Ph.D. and M.Sc. students. She has been the chair of over ten international conferences. She has authored two books in computer science and over twenty published papers in data mining, ontology engineering, e-learning, operating systems, and software engineering.

References

1. Elsayed E: Supporting convergent technologies by ontology-based for BNIC e-science. AIML Conference, UAE; 2011.

2. Ikdam A, Izzat A: Automatic code homework grading based on concept extraction. International Journal of Software Engineering and Its Applications, vol 5, no 4. Yarmouk University, Irbid, Jordan; 2011.

3. Elsayed E: Graphology expert system, VAK-test, and graphics for E-course development. Egyptian Computer Science Journal ECS 2010, 34(4):11–20.

4. Foong P: Using short open-ended mathematics questions to promote thinking and understanding. National Institute of Education, Singapore; 2002.

5. Larisa S, Riichiro M: Ontology of test. Osaka University, Japan; 2001. http://www.ei.sanken.osaka-u.ac.jp/pub/larisa/402-115.pdf. Accessed 25 Oct 2012.

6. Jesualdo T, Fernández B, Damián C-C: OeLE: applying ontologies to support the evaluation of open questions-based tests. 2006. http://academic.research.microsoft.com/Publication/5639019/. Accessed 12 Nov 2012.

7. Farida B-D, Malik S-M, Catherine C: ODALA: an ontological model for an automated evaluation of the learner's state of knowledge: application to a web-based algorithmic teaching. Vol 4, no 1. Edited by Bouarab D. Algeria; 2009. http://www.knowledgetaiwan.org/ojs/index.php/ijbi/article/viewPDFInterstitial/202/54. Accessed 13 Feb 2013.

8. Konstantina C, Maria V: A knowledge representation approach using fuzzy cognitive maps for better navigation support in an adaptive learning system. SpringerPlus 2013, 2:81. doi:10.1186/2193-1801-2-81. http://www.springerplus.com/content/2/1/81

9. Chang S-H, Lin P-C, Lin Z-C: Measures of partial knowledge and unexpected responses in multiple-choice tests. International Forum of Educational Technology & Society (IFETS) 2007, 95–109. http://ifets.info/journals/10_4/10.pdf

10. Rein P: Prospects of automatic assessment of step-by-step solutions in algebra. In Proceedings of the Ninth IEEE International Conference on Advanced Learning Technologies. IEEE Computer Society, Washington, DC, USA; 2009:535–537. doi:10.1109/ICALT.2009.123

11. Mu L, Qian X, Zhang Z, Zhao G, Xu Y: An assessment tool for assembly language programming. In International Conference on Computer Science and Software Engineering, vol 5. IEEE Xplore Digital Library, Wuhan, Hubei; 2008:882–884. doi:10.1109/CSSE.2008.111

12. Miquel B, Robert N, Albert O, Enric R, Albert R: The Barcelogic SMT solver. In Proceedings of the 20th International Conference on Computer Aided Verification, Lecture Notes in Computer Science, vol 5123. Edited by Gupta A, Malik S. Springer, Berlin, Heidelberg; 2008:294–298. http://www.springerlink.com/index/10.1007/978-3-540-70545-1

13. Thomas B, Diego C, David D, Pascal F: veriT: an open, trustable and efficient SMT-solver. In Automated Deduction CADE-22, Lecture Notes in Computer Science, vol 5663. Springer, Berlin, Heidelberg; 2009:151–156. http://www.springerlink.com/content/f33m4615152325x3

14. Dutertre B, De Moura L: The Yices SMT solver. 2006. http://yices.csl.sri.com/tool-paper.pdf. Accessed 1 Dec 2012.

15. David D: Integration of SMT-solvers in B and Event-B development environments. Universidade Federal do Rio Grande do Norte, Departamento de Informática e Matemática Aplicada, Natal, RN, Brazil; 2011.

16. Radoslav F, Michal R: Networked Digital Technologies, Communications in Computer and Information Science, vol 88. Springer, Cairo; 2010:883–888. doi:10.1109/ISDA.2010.5687151

17. Réka V: Educational ontology and knowledge testing. Electronic Journal of Knowledge Management, EJKM 2007, 5(1):123–130.

18. Emanuel R, Christian G, Heinz D: Deriving ontologies and assessment rubrics for electronic documents with human support for automatic assessment purposes. Institute for Information Systems and Computer Media (IICM), Graz University of Technology, A-8010 Graz, Austria; 2010.

19. Lucía R, Milagros G, María L: Conceptualizing the e-learning assessment domain using an ontology network. International Journal of Artificial Intelligence and Interactive Multimedia 2012, 1(6):20–28. doi:10.9781/ijimai.2012.163

20. Mukesh B: Boolean algebra. http://wikieducator.org/images/5/5e/CS_Revision_KIT_(CBSE-XII).pdf. Accessed Jan 2013.

21. Bill F: Drupal for Education and E-learning. Packt Publishing Ltd., Birmingham, Mumbai; 2008.


Author information


Corresponding author

Correspondence to Eman Elsayed.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors (EE, KE, and ST) worked together on the conception, design, writing, and application of the proposed methods in this paper; all read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Elsayed, E., Eldahshan, K. & Tawfeek, S. Automatic evaluation technique for certain types of open questions in semantic learning systems. Hum. Cent. Comput. Inf. Sci. 3, 19 (2013). https://doi.org/10.1186/2192-1962-3-19

