Friday, April 5, 2019
Comparative Study of Advanced Classification Methods
CHAPTER 7
TESTING AND RESULTS

7.0 Introduction to Software Testing
Software testing is the process of executing a program or system with the intent of finding errors, termed bugs. It covers any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Software bugs will almost always exist in any software module of moderate size, not because programmers are careless or irresponsible, but because the complexity of software is generally intractable and humans have only a limited ability to manage complexity. It is also true that for any complex system, design defects can never be completely ruled out.

7.2 Testing Process
The basic goal of the software development process is to produce software that has no errors, or very few errors. In an effort to detect errors soon after they are introduced, each phase ends with a verification activity such as a review. However, most of these verification activities in the early phases of software development are based on human evaluation and cannot detect all errors. The testing process starts with a test plan. The test plan specifies all the test cases required. The test suite is then executed with the test cases, and reports are produced and analyzed. When testing of a unit is complete, the tested units can be combined with other untested modules to form new test units. Testing of any unit involves the following:
- Plan test cases
- Execute test cases
- Evaluate the results of the testing

7.3 Development of Test Cases
A test case in software engineering is a set of conditions or variables under which a tester determines whether an application or software system is working correctly.
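The plan-execute-evaluate cycle described in 7.2 can be illustrated with a minimal sketch using Python's built-in unittest module. The unit under test, predict, is a hypothetical stand-in for a classifier, not code from this work:

```python
import unittest

def predict(x):
    """Hypothetical unit under test: a stub classifier that labels
    an input positive when the sum of its features is non-negative."""
    return 1 if sum(x) >= 0 else -1

class TestPredict(unittest.TestCase):
    # Plan: one test method per planned test case.
    def test_positive_input(self):
        # Execute the unit, then evaluate against the expected result.
        self.assertEqual(predict([0.5, 0.2]), 1)

    def test_negative_input(self):
        self.assertEqual(predict([-0.5, -0.2]), -1)

if __name__ == "__main__":
    unittest.main()
```

Running the module executes all planned cases and reports each as a pass or failure, which is exactly the evaluation step of the testing process.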
The mechanism for determining whether a software program or system has passed or failed such a test is known as a test oracle. Test cases follow a certain format, given as follows:
Test case id: Every test case has an identifier uniquely associated with it. This id is used to track the test case in the system upon execution. The same test case id is used in defining the test script.
Test case description: Every test case has a description, which states what functionality of the software is to be tested.
Test category: The test category defines the business test case category, such as functional test, negative test, or accessibility test; these are usually associated with the test case id.
Expected result and actual result: These are used within the respective API. As the testing is done for a web application, the actual result will be available within the web page.
Pass/fail: The result of a test case is either pass or fail. Validation occurs based on the expected and actual results. If the expected and actual results are identical, the test case passes; otherwise, it fails.

7.4 Testing of Application Software
The testing done on the application software is as follows:
- Integration Testing

7.4.1 Integration Testing
In this phase of software testing, individual software modules are combined and tested as a group. The purpose of integration testing is to verify the functional, performance, and reliability requirements placed on major design items. These design items, i.e. assemblages (or groups of units), are exercised through their interfaces using black box testing, with success and error cases simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested, and individual subsystems are exercised through their input interfaces.
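The test-case format described in 7.3 (id, description, category, expected versus actual result, pass/fail) can be modelled as a simple record. This is an illustrative sketch only; the field names are assumptions, not the format of any particular tool:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str           # uniquely identifies the test case
    description: str       # what functionality is being tested
    category: str          # e.g. functional, negative, accessibility
    expected: object       # expected result
    actual: object = None  # filled in when the test is executed

    def result(self):
        # Pass if expected and actual results are identical, else fail.
        return "pass" if self.expected == self.actual else "fail"

tc = TestCase("TC-01", "classify a positive-side point", "functional", 1)
tc.actual = 1              # recorded after executing the test
print(tc.result())         # pass
```

The result method implements the pass/fail validation rule: identical expected and actual results pass, anything else fails.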
Test cases are constructed to test that all components within assemblages interact correctly, for example across procedure calls or process activations; this is done after testing the individual modules, i.e. unit testing. The overall idea is a building-block approach, in which verified assemblages are added to a verified base which is then used to support the integration testing of further assemblages. In this approach, all or most of the developed modules are coupled together to form a complete software system or a major part of the system, which is then used for integration testing. Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build a program structure that has been dictated by the design.

The top-down approach to integration testing requires that the highest-level modules be tested and integrated first. This allows high-level logic and data flow to be tested early in the process, and it tends to minimize the need for drivers. The bottom-up approach requires that the lowest-level units be tested and integrated first. These units are frequently referred to as utility modules. By using this approach, utility modules are tested early in the development process and the need for stubs is minimized. The third approach, sometimes referred to as the umbrella approach, requires testing along functional data and control-flow paths. First, the inputs for functions are integrated in the bottom-up pattern.

7.4.1.1 Test Cases for Support Vector Machine
The Support Vector Machine is tested for attributes which fall only on the positive side of the hyperplane, attributes which fall only on the negative side of the hyperplane, attributes which fall on both the positive and negative sides of the hyperplane, and attributes which fall on the hyperplane itself. The expected results agree with the actual results.
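The three hyperplane cases above can be exercised with a small sketch. It assumes a linear decision function f(x) = w.x - gamma (the chapter reports w and gamma for each dataset but not the exact formula, so the bias convention is an assumption), with the sign of f(x) giving the side of the hyperplane and f(x) = 0 meaning the point lies on it. The values of w and gamma below are illustrative, not taken from the thesis:

```python
def decision(w, gamma, x):
    """Assumed linear decision value: w.x - gamma."""
    return sum(wi * xi for wi, xi in zip(w, x)) - gamma

def classify(w, gamma, x):
    f = decision(w, gamma, x)
    if f > 0:
        return 1    # point falls on the positive side of the hyperplane
    if f < 0:
        return -1   # point falls on the negative side of the hyperplane
    return 0        # point falls exactly on the hyperplane

w, gamma = [1.0, 1.0], 1.0            # illustrative values
print(classify(w, gamma, [1.0, 1.0]))  # 1  (positive side)
print(classify(w, gamma, [0.0, 0.0]))  # -1 (negative side)
print(classify(w, gamma, [0.5, 0.5]))  # 0  (on the hyperplane)
```

Each branch of classify corresponds to one of the SVM test cases, so the three calls cover positive-side, negative-side, and on-hyperplane inputs.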
Table 7.1 Test Cases for Support Vector Machine

7.4.1.2 Test Cases for Naive Bayes Classifier
The Naive Bayes Classifier is tested for attributes which belong only to class 1, attributes which belong only to class -1, and attributes which belong to both class 1 and class -1. The expected results match the actual results.

Table 7.2 Test Cases for Naive Bayes Classifier

7.5 Testing Results of Case Studies
A case study is a particular example of something used or analyzed in order to illustrate a thesis or principle. It is a documented study of a real-life situation or of a conceptual scenario.

7.5.1 Problem Statement: Haberman Dataset
The Haberman dataset contains cases from a study conducted at the University of Chicago's Billings Hospital on the survival of patients who had undergone surgery for breast cancer. The task is to determine whether a patient survived 5 years or longer (positive) or died within 5 years (negative).

relation haberman
attribute Age integer [30, 83]
attribute Year integer [58, 69]
attribute Positive integer [0, 52]
attribute Survival positive, negative
inputs Age, Year, Positive
outputs Survival

Training Set
Test Set
Weight vector and gamma:
w = [0.0991, 0.0775, 0.2813]
gamma = 0.3742

Predicted class labels of the test set.
Confusion matrix of the SVM classifier:
True Positive (TP) = 8
False Negative (FN) = 27
False Positive (FP) = 8
True Negative (TN) = 110
AUC of classifier = 0.517792
Accuracy of classifier = 77.124183
Error rate of classifier = 22.875817
F_score = 31.372549
Precision = 50.0
Recall = 22.857143
Specificity = 93.220339

Confusion Matrix for SVM
Fig 7.1 Bar chart of SVM for various Performance Metrics

Predicted class labels of the Naive Bayes Classifier:
True Positive (TP) = 10
False Negative (FN) = 25
False Positive (FP) = 11
True Negative (TN) = 107
AUC of Classifier = 0.5202
Accuracy of Classifier = 76.4706
Error Rate of Classifier = 23.5294
F_score = 35.7143
Precision = 47.6191
Recall = 28.5714
Specificity = 90.678

Confusion Matrix for NBC
Fig 7.2 Bar Chart of NBC for various Performance Metrics
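The metrics reported above follow directly from the confusion matrix. As a sketch, the snippet below recomputes them from the SVM counts for the Haberman test set (TP=8, FN=27, FP=8, TN=110) using the standard definitions; it reproduces the figures in the text:

```python
def metrics(tp, fn, fp, tn):
    """Standard classification metrics from a 2x2 confusion matrix,
    expressed as percentages."""
    total = tp + fn + fp + tn
    accuracy = 100.0 * (tp + tn) / total
    precision = 100.0 * tp / (tp + fp)
    recall = 100.0 * tp / (tp + fn)          # a.k.a. sensitivity
    specificity = 100.0 * tn / (tn + fp)
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f_score

acc, prec, rec, spec, f1 = metrics(tp=8, fn=27, fp=8, tn=110)
print(round(acc, 6))    # 77.124183
print(round(prec, 1))   # 50.0
print(round(rec, 6))    # 22.857143
print(round(spec, 6))   # 93.220339
print(round(f1, 6))     # 31.372549
```

The error rate is simply 100 minus the accuracy (22.875817 here). The same function applied to the NBC counts, or to the Titanic counts in the next section, yields the corresponding reported values.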
Table 7.3 Comparison of SVM and NBC for various Performance Metrics
Fig 7.3 Bar Chart for Comparison of SVM and NBC

7.5.2 Titanic Dataset
The Titanic dataset gives the values of four attributes. The attributes are social class (first class, second class, third class, or crew member), age (adult or child), sex, and whether or not the person survived.

relation titanic
attribute Class real [-1.87, 0.965]
attribute Age real [-0.228, 4.38]
attribute Sex real [-1.92, 0.521]
attribute Survived -1.0, 1.0
inputs Class, Age, Sex
outputs Survived

Training Set
Test Set
w = [-0.1025, 0.0431, -0.3983]
gamma = 0.3141

Predicted class labels of the test set.
Confusion matrix of the SVM classifier:
True Positive (TP) = 154
False Negative (FN) = 181
False Positive (FP) = 64
True Negative (TN) = 701
AUC of Classifier = 0.426392
Accuracy of classifier on the test set = 77.727273
Error rate of classifier on the test set = 22.272727
F_score = 55.696203
Precision = 70.642202
Recall = 45.970149
Specificity = 91.633987

Confusion Matrix for SVM
Fig 7.4 Bar chart of SVM for various Performance Metrics

Predicted class labels of the Naive Bayes Classifier:
True Positive (TP) = 197
False Negative (FN) = 138
False Positive (FP) = 148
True Negative (TN) = 617
AUC of Classifier = 0.4782
Accuracy of Classifier = 74
Error Rate of Classifier = 26
F_Score = 57.9412
Precision = 57.1015
Recall = 58.806
Specificity = 80.6536

Confusion Matrix for NBC
Fig 7.5 Bar chart of NBC for various Performance Metrics

Table 7.4 Comparison of SVM and NBC for various Performance Metrics
Fig 7.6 Bar Chart for Comparison of SVM and NBC

Department of CSE, RNSIT, 2014-15