
Phillip Oluwatobi Awodutire, Thomas Xavier, Joby K. Jose

Inferences on Stress Strength Reliability in Multicomponent System for Type I Generalized Half-Logistic Distribution

 

This article deals with inference on stress-strength reliability in a multicomponent system for the Type I generalized half-logistic distribution. The strength and stress components are assumed to be independently distributed. We develop some statistical properties of the Type I generalized half-logistic distribution, and the expression for stress-strength reliability in a multicomponent setup is obtained and studied. Two methods, maximum likelihood and Bayesian estimation, are employed to estimate the multicomponent stress-strength reliability. The Bayes estimates are obtained under the squared error loss function using gamma priors for the parameters. Simulation studies are conducted to assess the efficiency of the methods, and the importance of the model is demonstrated by applying it to a real-life data set.
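For orientation, multicomponent stress-strength reliability R(s,k) is the probability that at least s of k independent strengths exceed a common stress. The Monte Carlo sketch below estimates it; the half-logistic law from scipy is only a stand-in for the paper's Type I generalized half-logistic distribution, and all parameter values are illustrative.

# Monte Carlo sketch of multicomponent stress-strength reliability R(s,k).
# scipy's half-logistic law is a stand-in for the Type I generalized
# half-logistic distribution used in the paper; settings are illustrative.
import numpy as np
from scipy import stats

def multicomponent_reliability(strength_rv, stress_rv, s, k, n_sim=100_000, seed=1):
    rng = np.random.default_rng(seed)
    strengths = strength_rv.rvs(size=(n_sim, k), random_state=rng)  # k strengths per trial
    stress = stress_rv.rvs(size=(n_sim, 1), random_state=rng)       # one common stress per trial
    survivors = (strengths > stress).sum(axis=1)                    # components exceeding the stress
    return np.mean(survivors >= s)                                  # estimate of R(s,k)

# Example: 2-out-of-3 system with stand-in strength/stress distributions.
print(multicomponent_reliability(stats.halflogistic(scale=2.0),
                                 stats.halflogistic(scale=1.0), s=2, k=3))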

 

 

Ramajeyam Tharshan, Pushpakanthie Wijekoon

A New Mixed Poisson Distribution for Over-dispersed Count Data: Theory and Applications

 

In this paper, an alternative mixed Poisson distribution is proposed by amalgamating the Poisson distribution and a modification of the Quasi Lindley distribution. Some fundamental structural properties of the new distribution, namely the shape of the distribution and moments and related measures, are explored. It is noted that the new distribution can be either unimodal or bimodal and is over-dispersed. Further, it has a tendency to accommodate various right-tail behaviors and variance-to-mean ratios. Estimation of its unknown parameter by the maximum likelihood method is examined in a simulation study based on asymptotic theory. Finally, two real-world data sets are used to illustrate the flexibility and potential of the new distribution.

 

 

 Aijaz Ahmad, Muzamil Jallal, Afaq Ahmad

A Novel Approach for Constructing Distributions with an Example of the Rayleigh Distribution

 

In this paper, we describe a novel technique for creating distributions based on logarithmic functions, which we refer to as the Log Exponentiated Transformation (LET). The LET technique is then applied to the Rayleigh distribution, resulting in a new distribution known as the Log Exponentiated Rayleigh distribution (LERD). Several distributional properties of the formulated distribution are discussed. Expressions for its ageing properties are derived and discussed explicitly, and the behaviour of the pdf, cdf and hazard rate function is illustrated through different graphs. The parameters are estimated by maximum likelihood, and a simulation analysis is conducted to measure the effectiveness of the estimators. Finally, the versatility and efficacy of the formulated distribution are examined using a real-life data set.

 

 

M. Manoharan, P. Kavya

A New Reliability Model and Applications

  

The Lomax or Pareto Type II distribution has a wide range of applications in many areas, including reliability and life testing. In this paper, we modify the Lomax distribution using the KM transformation to enhance its applicability; the resulting distribution is parsimonious in parameters. Substituting the cumulative distribution function (cdf) of the Lomax distribution into the KM transformation provides a new modified Lomax distribution. The behavior of its hazard rate function is studied graphically and theoretically using Glaser's method. Its analytical properties are derived, and the parameters are estimated using the maximum likelihood method. We consider two real data sets to show the flexibility of the proposed model, which provides a better fit to these data sets than the other well-known distributions considered in this study.
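As a rough illustration of the construction, the sketch below plugs the Lomax cdf into a KM-type transform, assuming the commonly cited form G(x) = e/(e-1) * (1 - exp(-F(x))); the paper's exact parametrization may differ, and the hazard rate here is evaluated numerically rather than from a closed form.

# Sketch of a KM-type transform applied to the Lomax cdf (assumed form
# G(x) = e/(e-1) * (1 - exp(-F(x))); not necessarily the paper's exact one).
import numpy as np

E = np.e

def lomax_cdf(x, alpha, lam):
    # Lomax (Pareto II) cdf with shape alpha and scale lam
    return 1.0 - (1.0 + x / lam) ** (-alpha)

def km_cdf(x, alpha, lam):
    # transformed (modified Lomax) cdf: same two parameters as the baseline
    return E / (E - 1.0) * (1.0 - np.exp(-lomax_cdf(x, alpha, lam)))

def km_hazard(x, alpha, lam, h=1e-6):
    # numerical hazard rate g(x) / (1 - G(x)) via a central-difference density
    g = (km_cdf(x + h, alpha, lam) - km_cdf(x - h, alpha, lam)) / (2 * h)
    return g / (1.0 - km_cdf(x, alpha, lam))

x = np.linspace(0.1, 10, 5)
print(km_cdf(x, alpha=2.0, lam=1.5))
print(km_hazard(x, alpha=2.0, lam=1.5))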

 

 

Deepthy G S, Nicy Sebastian

A New Life Time Distribution: Burr III Modified Weibull Distribution and its Application in the Burn-in Process

 

In burn-in analysis, models with a bathtub-shaped hazard rate and a bimodal density function are inevitable. This work focuses on a new five-parameter distribution, the Burr III Modified Weibull distribution, which can be used to design burn-in procedures and preventive maintenance for incurable devices. Statistical properties such as the quantile function, hazard rate function and order statistics are discussed. The model parameters are estimated using the maximum likelihood technique, and the performance of the proposed model is evaluated by simulation. Finally, a real data set is presented to demonstrate the model's utility and its application in the burn-in process.

 

 

Tijjani A. Waziri, Bashir M. Yakasai, Rahama S. Abdullahi

Analysis of Some Proposed Replacement Policies

 

This paper develops an age replacement cost model under the standard age replacement policy (SARP) for some multi-unit systems. In addition, two other age replacement cost models are constructed for the multi-unit systems under proposed policies A and B. A numerical example is provided to illustrate the proposed cost models under SARP, policy A and policy B, and the results obtained will be beneficial to engineers, maintenance managers and plant management in selecting and applying optimal preventive maintenance policies.
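For orientation, the classical single-unit SARP cost rate is C(T) = (c_p*R(T) + c_f*F(T)) / integral_0^T R(t) dt, minimized over the replacement age T. The sketch below evaluates it numerically with an illustrative Weibull lifetime; it is only a baseline, not the paper's multi-unit models under policies A and B.

# Classical single-unit age replacement cost rate under SARP, minimized over T.
# The Weibull lifetime and the costs c_p, c_f are purely illustrative.
import numpy as np
from scipy import integrate, optimize, stats

life = stats.weibull_min(c=2.5, scale=100.0)    # illustrative lifetime law
c_p, c_f = 1.0, 10.0                            # preventive vs failure replacement cost

def cost_rate(T):
    R = life.sf                                 # survival function R(t)
    mean_cycle, _ = integrate.quad(R, 0.0, T)   # expected cycle length
    return (c_p * R(T) + c_f * life.cdf(T)) / mean_cycle

res = optimize.minimize_scalar(cost_rate, bounds=(1.0, 300.0), method="bounded")
print("optimal replacement age:", res.x, "minimum cost rate:", res.fun)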

 

 

Komal Anadkat, Hiteishi Diwanji, Shahid Modasiya

Effect of Preprocessing in Human Emotion Analysis Using Social Media Status Dataset

 

Emotion analysis using social media text is an emerging research area nowadays. It helps researchers recognize the emotional state of users and identify mental-health problems such as depression or anxiety, which may lead to suicide if not treated. Social media platforms such as WhatsApp, Facebook and Instagram are widely used because they provide an affordable and reliable medium for transferring data, sharing thoughts, and routine informal communication. Social media status updates are commonly analyzed to recognize the mood, emotion, thought process or mental state of an individual, as people generally post a status reflecting what they feel. Pre-processing, in turn, is a crucial step for any kind of text data analysis. In this paper, the social media status dataset is first pre-processed using various methods and then passed on for feature extraction and classification. For the machine learning approach, we use count vectors and TF-IDF to extract features from the data (a minimal pipeline in this spirit is sketched below). Using count-vector features, the accuracy achieved on the pre-processed data is 68.90%, 69.33%, 70.59%, 64.95% and 69.33% for naïve Bayes, LDA, random forest, SGD and MLP respectively. Similarly, using TF-IDF features, the accuracy achieved is 65.76%, 69.96%, 68.49%, 65.96% and 70.80% for naïve Bayes, LDA, random forest, SGD and MLP respectively. The experimental results show that pre-processing helps to improve classifier accuracy and that CNN outperforms the traditional approaches, achieving 79% accuracy.
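A minimal sketch of the count-vector and TF-IDF pipelines mentioned above, using scikit-learn; the toy texts, labels and classifier settings are placeholders, not the authors' dataset or configuration.

# Count-vector vs TF-IDF feature extraction with a simple classifier.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

texts  = ["feeling happy today", "so sad and alone", "great day with friends",
          "worried about exams", "angry at everything", "calm and relaxed"]
labels = ["joy", "sadness", "joy", "fear", "anger", "joy"]      # toy status data

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.33, random_state=0)

for name, vec in [("count", CountVectorizer()), ("tfidf", TfidfVectorizer())]:
    clf = make_pipeline(vec, MultinomialNB())        # swap in LDA, RF, SGD, MLP similarly
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))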

 

 

Pradeep Chaudhary, Anika Sharma

A Two Non-Identical Unit Parallel System With Priority in Repair and Correlated Life Times

 

The paper analyses a parallel system of two non-identical units with respect to various measures of system effectiveness using the regenerative point technique. The lifetimes of the two units are taken to be correlated random variables, and a single repairman is always available with the system to repair a failed unit.

 

 

Naveen Kumar, S.C. Malik, N. Nandal

Stochastic Analysis of a Repairable System of Non-Identical Units With Priority and Conditional Failure of Repairman

 

Here, we describe the stochastic analysis of a repairable system consisting of two non-identical units, a main unit and a duplicate unit. The units can fail completely directly from the operative state. A single repairman is engaged to carry out the repair activities; he may fail while working on the main unit, whereas he repairs the duplicate unit without any problem. Priority for operation and repair is given to the duplicate unit over the main unit. After receiving treatment, the repairman works with full efficiency. The failure times of the units follow the negative exponential distribution, while arbitrary distributions are taken for the repair and treatment times. The semi-Markov process and the regenerative point technique are used to study the probabilistic behavior of the system in its possible transition states. The reliability characteristics of the system model are examined numerically and graphically for particular values of the parameters, and the profit of the system is also analyzed for fixed values of the repair and other maintenance costs.

 

 

Joseph Thomas Eghwerido, Eferhonore Efe-Eyefia

The Reliability Performance of the Exponential Inverted Marshall-Olkin-G Family of Distributions: Non-Bayesian Properties and Applications

 

This article introduces a class of generators, called the exponential Inverted Marshall-Olkin-G (EMA-G) distribution, for enhancing the performance, productivity and flexibility of statistical distributions. The characteristics of the new class of generators are obtained and examined, and some special cases of the proposed model are investigated. The Bernstein function of the EMA-G model is obtained in closed form. The maximum likelihood method is adopted to estimate the parameters of the formulated EMA-G distribution. The flexibility, tractability, applicability and viability of the new class of distributions are examined by Monte Carlo simulation, and two real-life data sets are used to illustrate the empirical performance of the generator. The results indicate that the EMA-G density gives a better fit, in terms of goodness-of-fit, than some existing statistical generators in the literature.

 

 

K. Jyothsna, P. Vijaya Laxmi, P. Vijaya Kumar

Optimization of a Feedback Working Vacation Queue With Reverse Balking and Reverse Reneging

 

This paper analyzes a steady-state finite-buffer M/M/1 feedback queue with reverse balking, reverse reneging and multiple working vacations. The concepts of reverse balking and reverse reneging arise in investment businesses, where the larger the number of customers associated with a firm, the lower the probability that an arriving customer balks, and similarly for reneging. Furthermore, if a customer is dissatisfied with the service provided, he or she may choose to rejoin the queue as a feedback customer. The server leaves for a working vacation whenever the system becomes empty instead of staying idle. Vacation times and service times during working vacations are independent exponentially distributed random variables. The model's steady-state system-length distributions are calculated using the matrix approach (illustrated schematically below). Some performance characteristics and cost optimization using ant colony optimization (ACO) are presented, and a sensitivity analysis is performed with numerical results shown in tables and graphs.
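The sketch below shows the generic matrix approach for a finite-buffer birth-death queue: build the generator Q with state-dependent rates and solve pi*Q = 0 with sum(pi) = 1. The rate functions only mimic the flavour of reverse balking and reverse reneging; they are not the paper's model, which also includes feedback and working vacations.

# Steady-state distribution of a finite birth-death chain via pi*Q = 0.
import numpy as np

N = 10                                # buffer size
lam, mu, xi = 2.0, 3.0, 0.5           # base arrival, service and reneging rates (illustrative)

def arrival(n):    # "reverse balking": effective arrival rate grows with the queue length
    return lam * (n + 1) / (N + 1)

def departure(n):  # service plus "reverse reneging": abandonment shrinks as the queue grows
    return mu + xi * (N - n) / N

Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        Q[n, n + 1] = arrival(n)
    if n > 0:
        Q[n, n - 1] = departure(n)
    Q[n, n] = -Q[n].sum()             # diagonal makes each row sum to zero

# Solve pi*Q = 0 with normalization by replacing one balance equation.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(N + 1); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print("steady-state distribution:", np.round(pi, 4))
print("mean system length:", pi @ np.arange(N + 1))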

 

 

Tuzun Tolga İnan, Neslihan Gokmen İnan

Analysis of the Primary Factors Affecting the Most Fatal Aviation Accidents: A Machine Learning Approach

 

Safety is examined in this study through the most fatal accidents in aviation history, considering human, technical and sabotage/terrorism factors. Although the aviation industry began with the first powered flight in 1903, the safety concept has been studied only since the beginning of the 1950s. Safety was first examined in terms of technical factors; in the late 1970s, human factors began to be analyzed. Beyond these primary causes, other factors can also have an impact on accidents. The purpose of the study is therefore to determine the factors affecting the 100 most fatal accidents, including aircraft type, distance, flight phase, primary cause, number of total passengers, and time period, by classifying survivor/non-survivor passengers. Logistic regression and discriminant analysis are used as multivariate statistical analyses and compared with machine learning approaches to assess the robustness of the algorithms. The machine learning techniques perform better than the multivariate statistical methods in terms of accuracy (0.910), false-positive rate (0.084) and false-negative rate (0.118). In conclusion, flight phase, primary cause and total passenger numbers are found to be the most important factors, according to both the machine learning and the multivariate statistical models, for classifying the accidents' survivor/non-survivor passengers.
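An illustrative comparison of a multivariate statistical model (logistic regression) against a machine-learning classifier (random forest) on a synthetic survivor/non-survivor problem; the features, data and settings below are stand-ins, not the paper's accident dataset.

# Accuracy, false-positive and false-negative rates for two classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

X, y = make_classification(n_samples=1000, n_features=6, n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    y_hat = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
    print(name,
          "accuracy=", round(accuracy_score(y_te, y_hat), 3),
          "FPR=", round(fp / (fp + tn), 3),
          "FNR=", round(fn / (fn + tp), 3))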

 

 

Agni Saroj, Prashant K. Sonker, Mukesh Kumar

Statistical Properties and Application of a Transformed Lifetime Distribution: Inverse Muth Distribution

 

In this paper, we propose a transformed distribution called the inverse Muth (IM) distribution. Expressions for its probability density function (pdf), cumulative distribution function (cdf), reliability and hazard functions are well defined, and statistical properties such as the quantile function, moments, skewness and kurtosis are derived. Maximum likelihood estimation (MLE) and maximum product spacing estimation (MPSE) are used to estimate the parameters. The IM distribution is positively skewed, and its hazard rate has an upside-down bathtub (UBT) shape. An important finding of the study is that the moments of the IM distribution do not exist. A real data set (the active repair times for an airborne communication transceiver) is used for illustration after taking a natural extension of the IM distribution. It is expected that the proposed model can serve as a lifetime model in the field of reliability.
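A generic maximum product spacing (MPS) sketch: maximize the mean log-spacing of the fitted cdf at the ordered sample. The Frechet (inverse Weibull) law from scipy stands in for the inverse Muth distribution, whose cdf is not reproduced here.

# Maximum product spacing estimation for a two-parameter stand-in model.
import numpy as np
from scipy import optimize, stats

def neg_mps(params, x_sorted, cdf):
    if np.any(np.asarray(params) <= 0):               # keep shape and scale positive
        return np.inf
    u = cdf(x_sorted, *params)
    spacings = np.diff(np.concatenate(([0.0], u, [1.0])))   # D_1, ..., D_{n+1}
    spacings = np.clip(spacings, 1e-12, None)                # guard against ties
    return -np.mean(np.log(spacings))

rng = np.random.default_rng(0)
data = np.sort(stats.invweibull(c=2.0, scale=1.5).rvs(size=200, random_state=rng))

cdf = lambda x, c, scale: stats.invweibull.cdf(x, c, scale=scale)
res = optimize.minimize(neg_mps, x0=[1.0, 1.0], args=(data, cdf), method="Nelder-Mead")
print("MPS estimates (shape, scale):", res.x)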

 

 

Sanket B. Suthar, Amit R. Thakkar

Hybrid Deep ResNet With Inception Model for Optical Character Recognition in Gujarati Language

 

In an Optical Character Recognition (OCR) system, achieving high recognition performance is important. OCR and visual perception are affected by inclined characters in every language. Deep learning methods play an important role in OCR and can outperform humans in recognition performance. In this research, a hybrid deep learning technique is therefore applied to recognize Gujarati characters. Initially, Gujarati characters collected from different sources are pre-processed using different techniques: an Adaptive Wiener Filter (AWF) is used for noise removal, and binarization and contrast enhancement are performed with the Contrast Limited Adaptive Histogram Equalization (CLAHE) method (a preprocessing sketch in this spirit follows). Finally, a hybrid deep ResNet with Inception (GoogLeNet) model is proposed to perform character recognition in the Gujarati language; this hybrid architecture also performs feature extraction, a major task in OCR. Python is used to implement the proposed methodology and solve the mathematical model. Scanned documents containing Gujarati characters are used to evaluate the robustness of the proposed methodology, and its performance is examined using various metrics and compared with other deep learning algorithms.
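A sketch of the preprocessing chain described above (Wiener filtering for noise removal, CLAHE for contrast enhancement, then binarization), using SciPy and OpenCV as stand-in tools; filter sizes, clip limits and the placeholder image are illustrative, not the authors' settings.

# Wiener filter -> CLAHE -> Otsu binarization on a grayscale scan.
import cv2
import numpy as np
from scipy.signal import wiener

def preprocess(gray_image):
    # gray_image: 2-D uint8 array of a scanned document page
    denoised = wiener(gray_image.astype(np.float64), mysize=5)        # adaptive Wiener filter
    denoised = np.clip(denoised, 0, 255).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))       # contrast enhancement
    enhanced = clahe.apply(denoised)
    _, binary = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)    # binarization
    return binary

img = (np.random.rand(128, 128) * 255).astype(np.uint8)               # placeholder image
print(preprocess(img).shape)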

 

 

Divesh Garg, Reena Garg

Reliability Analysis and Profit Optimization of Briquette Machine by Considering Neglected Faults

 

Sustainable energy plays a significant role in socio-economic advancement by raising the standard of living of all human beings. Briquetting is the process of compacting biomass residues into solid fuels in order to increase thermal capacity, combustion rate and calorific value, to name a few. In this paper, we consider not only the occurrence of minor and major faults but also other, commonly neglected faults such as abnormal sound, overheating of the motor unit and vibration. Such neglected faults may not affect the working of the system immediately, but ignoring them may turn them into major faults in the future. An ordinary repairman can easily rectify all machine faults except some major faults, for which an expert repairman is required. Moreover, we analyse the availability of the system and optimize the system profit using the Artificial Bee Colony optimization algorithm, and a graphical study of these parameters is presented.

 

 

M. A. Lone, I. H. Dar, T. R. Jan

A New Method for Generating Distributions with an Application to Weibull Distribution

 

In the probability literature it has been noticed that the classical probability distributions do not always furnish an adequate fit and fail to model real-life data with non-monotonic hazard rate behaviour. To overcome this limitation, researchers work on refining these distributions. In this paper, a new method is presented for adding an extra parameter to a family of distributions to gain flexibility and potential, and the method is specialized to the two-parameter Weibull distribution. A comprehensive mathematical treatment of the new distribution is provided: we give closed-form expressions for the density, cumulative distribution, reliability function, hazard rate function, the r-th moment, the moment generating function and the order statistics. Moreover, we discuss the mean residual lifetime, stress-strength reliability and maximum likelihood estimation. The adequacy of the proposed distribution is supported by two real lifetime data sets as well as simulated data.

 

 

S. Suganya, K. Pradeepa Veerakumari

Skip-Lot Sampling Plan of Type SkSP-T With Group Acceptance Sampling Plan as Reference Plan Under Burr Type XII Distribution

 

This paper designs a skip-lot sampling plan of type SkSP-T with the group acceptance sampling plan as the reference plan, and the Burr type XII distribution is applied to model the lifetime of the product. The parameters of the proposed plan are determined using the two-point method on the operating characteristic curve, with specified producer's and consumer's risks. Tables are simulated for various parametric values of the SkSP-T plan, the group acceptance sampling plan and the Burr type XII distribution. The skip-lot sampling plan of type SkSP-T is also compared with the group acceptance single sampling plan and with the skip-lot sampling plan of type SkSP-2 using a group acceptance sampling plan under the Burr type XII distribution. Further, the efficiency of the proposed plan is discussed, and numerical illustrations and examples are given to justify it.

 

 

Shreya Bhunia, Proloy Banerjee

Some Properties and Different Estimation Methods for Inverse A(_) Distribution with an Application to Tongue Cancer Data

 

An inverted distribution is the distribution of the reciprocal of a random variable that follows a specified distribution. Here, a new one-parameter inverse A(_) distribution is introduced as the reciprocal of the A(_) distribution. Mathematical and statistical properties of the new distribution, such as survival characteristics, quantile functions, mode, order statistics, ageing intensity function and stochastic ordering, are derived and discussed. Furthermore, from the frequentist viewpoint, several estimation approaches are discussed, including maximum likelihood, maximum product of spacings, ordinary and weighted least squares, Cramér-von Mises and Anderson-Darling estimation. These methods are compared for both small and large samples through an extensive numerical simulation. The flexibility of the new lifetime distribution is demonstrated by modeling a tongue cancer data set, and the results indicate the superiority of the proposed model over some popular competing ones.

 

 

Rodrigo F. S. Gomes, Leandro Gauss, Fabio Sartori Piran, Daniel Pacheco Lacerda

Safety at Work: A Complex or an Exceedingly Simple Matter?

 

This paper uses the concept of inherent simplicity stemming from the Theory of Constraints to explain whether safety at work is a complex or an exceedingly simple matter. In this context, the study explores the causalities that govern safety at work, identifying its constructs and presenting logical propositions based on the theory-building blocks: classification, correlation, and causal consistency. To support the research, a dataset of 46 work-related accident investigation reports from an elevator company in Latin America was carefully analyzed using association rules (a toy example of such mining is sketched below). Moreover, direct observations grounded in inductive reasoning were used to speculate on plausible causes of the work-related accidents. The research strategy followed common strategies of theory building: theory-to-practice and practice-to-theory. As a result, a conceptual proposition is postulated based on the reasoning that safety at work is governed by very few constructs, and that its complexity is explained through two elements of inherent simplicity: degrees of freedom (interdependencies between constructs) and harmony (conflict resolution within the work environment). From the practitioners' perspective, the study also offers directions toward safety improvements at the organizational level by considering the impact of the interdependencies between constructs on safety at work.
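A toy association-rule computation in the spirit of the analysis above: support, confidence and lift for single-antecedent rules, computed directly so no specialised mining library is needed. The items, reports and thresholds are invented for illustration only.

# Support / confidence / lift for {a} -> {b} rules over toy accident reports.
from itertools import permutations

reports = [
    {"no_ppe", "manual_handling", "minor_injury"},
    {"no_ppe", "working_at_height", "severe_injury"},
    {"rushing", "manual_handling", "minor_injury"},
    {"no_ppe", "manual_handling", "severe_injury"},
    {"rushing", "working_at_height", "severe_injury"},
]
n = len(reports)
items = sorted(set().union(*reports))

def support(itemset):
    return sum(itemset <= r for r in reports) / n      # fraction of reports containing the itemset

for a, b in permutations(items, 2):                    # rules of the form {a} -> {b}
    supp = support({a, b})
    if supp == 0:
        continue
    conf = supp / support({a})
    lift = conf / support({b})
    if supp >= 0.4 and conf >= 0.6:                    # illustrative thresholds
        print(f"{a} -> {b}: support={supp:.2f}, confidence={conf:.2f}, lift={lift:.2f}")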

 

 

Elebe E. Nwezza, Uchenna U. Uwadi, C.K. Acha, Christian Osagie

Gumbel Marshall-Olkin Lomax: A New Distribution for Reliability Modelling

 

A new distribution for modeling the two approaches (physical and actuarial) to reliability problems is introduced. Statistical properties including the moments, mode and quantile function are derived, along with reliability measures such as the mean residual life and hazard rate. An alternative measure to the total time on test (TTT) for evaluating interfailure times is also derived. The unknown parameters of the new distribution are estimated using the maximum likelihood approach, and the asymptotic consistency of the estimators is evaluated through a simulation study. Two real-life data sets are used to illustrate the applicability of the new distribution and to compare it with existing distributions.
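For reference, the empirical scaled total time on test (TTT) transform is often used to judge the shape of the hazard rate from interfailure times; the short sketch below computes it on simulated data (the paper's alternative TTT-type measure is not reproduced here).

# Empirical scaled TTT transform: T(i/n) = (sum_{j<=i} x_(j) + (n-i) x_(i)) / sum_j x_(j).
import numpy as np

def scaled_ttt(sample):
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    total = x.sum()
    ttt = [(x[:i].sum() + (n - i) * x[i - 1]) / total for i in range(1, n + 1)]
    return np.arange(1, n + 1) / n, np.array(ttt)      # (i/n, T(i/n)) pairs

rng = np.random.default_rng(42)
u, t = scaled_ttt(rng.exponential(scale=10.0, size=50))
print(np.column_stack((u, t))[:5])    # exponential data should hug the diagonal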

 

 

D. Kumar, P. K. Chaurasia, P. Kumar, A. Chaurasia

A Novel Transformation: Based on Inverse Trigonometric Lindley Distribution

 

The present era depends heavily on machines of various kinds. Machines are assembled from many components, each with its own importance, and for a machine to function properly these components must be kept in good condition; for smooth functioning, a component should therefore be replaced before its failure. In this paper, we propose a new transformation, based for the first time purely on an inverse trigonometric function combined with the Lindley distribution, and accordingly named the "Inverse Trigonometric Lindley Distribution". We derive its various properties, such as the survival function, hazard rate function, moments, conditional moments, order statistics and entropy measures. The maximum likelihood estimator is considered for parameter estimation. To understand the behavior of the model, different real data sets are considered, and a detailed simulation study is performed to examine the long-run behavior of the estimators.

 

 

S.J Ayalakshmi, S. Vijilamery

Study on Acceptance Sampling Plan for Truncated Life Tests Based on Percentiles Using Gompertz Frechet Distribution

 

Acceptance sampling approaches are useful for minimizing the cost and time of inspecting submitted lots. In today's competitive environment, the expected quality assurance and reliability of products are very high, so truncated life tests are used in acceptance sampling plans: a time-truncated life test allows a decision to be reached on the product within a limited testing time. The Gompertz Frechet distribution is therefore considered as the model for the lifetime random variable when the life test is truncated at a pre-determined time. The operating characteristic functions of the sampling plans and the producer's risk are also discussed, and the results are illustrated with an example.
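A generic sketch of the operating characteristic (OC) function for a time-truncated attribute plan: put n items on test until time t0 and accept the lot if at most c failures occur, so L(p) is a binomial tail with p = F(t0). A Frechet cdf stands in for the Gompertz Frechet percentile model used in the paper; n, c, t0 and the lifetime parameters are illustrative.

# OC function L(p) = P(accept) = sum_{i=0}^{c} C(n,i) p^i (1-p)^(n-i).
import numpy as np
from scipy import stats

def oc_probability(n, c, p):
    return stats.binom.cdf(c, n, p)

lifetime = stats.invweibull(c=2.0, scale=1000.0)     # Frechet stand-in lifetime model
t0 = 500.0                                           # truncation time of the life test
p = lifetime.cdf(t0)                                 # probability an item fails before t0

for n, c in [(20, 2), (30, 2), (30, 3)]:
    print(f"n={n}, c={c}: P(accept)={oc_probability(n, c, p):.4f}")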

 

 

Neelam Singla, Sonia Kalra

Explicit Time Dependent Solution of a Two-State Retrial Queueing Model with Heterogeneous Servers

 

In this paper, a two-dimensional state retrial queueing system with two non-identical parallel servers is considered. Incoming (primary) calls arrive at the servers according to a Poisson process, and repeated calls follow the same pattern. The service times of the two servers are exponentially distributed with different rates. An incoming call that finds both servers busy joins an orbit and retries after a random amount of time. Time-dependent probabilities of the exact number of arrivals and the exact number of departures when both servers are free, when one server is busy, and when both servers are busy are derived for the system. Finally, the busy period distribution is obtained to illustrate the system dynamics.

 

 

Bhupendra Singh, Varun Agiwal, Amit Singh Nayal, Abhishek Tyagi

A Discrete Analogue of Teissier Distribution: Properties and Classical Estimation with Application to Count Data

 

This article presents a novel discrete distribution with a single parameter, called the discrete Teissier distribution. Despite having only one parameter, the model offers a high degree of fitting flexibility, as it is capable of modelling equi-, over- and under-dispersed, positively and negatively skewed, and increasing-failure-rate datasets. We explore numerous essential distributional features such as the recurrence relation, moments, generating function, index of dispersion, coefficient of variation, entropy, survival and hazard rate functions, mean residual life and mean past life functions, stress-strength reliability, order statistics, and infinite divisibility. Classical point estimators are developed using maximum likelihood, the method of moments and least-squares estimation, and an interval estimate based on Fisher's information is also presented. Finally, the applicability of the suggested discrete model is demonstrated using two complete real datasets.
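One standard way to build a discrete analogue of a continuous lifetime law is survival discretization, P(X = x) = S(x) - S(x+1) for x = 0, 1, 2, .... The Teissier survival function is not reproduced here; an exponential survival function is used below only to show the mechanics and the resulting index of dispersion.

# Survival-discretization sketch with a stand-in continuous survival function.
import numpy as np

def discretize(survival, support):
    x = np.asarray(support)
    return survival(x) - survival(x + 1)     # P(X = x) = S(x) - S(x+1)

surv = lambda t: np.exp(-0.4 * t)            # stand-in survival function (not Teissier)
x = np.arange(0, 200)                        # wide support so truncation error is negligible
pmf = discretize(surv, x)
mean = (pmf * x).sum()
var = (pmf * x ** 2).sum() - mean ** 2
print("first probabilities:", np.round(pmf[:5], 4))
print("mean:", round(mean, 4), "index of dispersion:", round(var / mean, 4))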

 

 

Intekhab Alam, Mohd Asif Intezar, Lalit Kumar Sharma, Mohammad Tariq Intezar, Aqsa Irfan

Costs of Age Replacement under Accelerated Life Testing with Censored Information

 

Accelerated life testing (ALT) helps manufacturers predict the various costs associated with a product under a warranty policy. The main motivations for ALT are the long lifetimes of today's manufactured goods, the short time between design and release, and the difficulty of testing items that are continuously used in ordinary environments; ALT therefore provides quick information about the life distribution of products. We describe how to design and analyze accelerated life testing plans to improve the quality and reliability of the product, and we focus on finding the expected cost rate and the expected total cost of age replacement under a pro-rata rebate warranty plan. The problem is studied under constant stress, assuming that the lifetimes of the units follow the Gompertz distribution (GD), for predicting the cost of age replacement under the warranty plan. The asymptotic variance-covariance matrix, confidence intervals for the parameters, and the respective errors are also obtained, and a simulation study is carried out to examine the statistical properties of the parameter estimates.

 

 

R. Vijayaraghavan, A. Pavithra

Selection of Life Test Sampling Inspection Plans for Continuous Production

 

Reliability sampling is a methodology often used in manufacturing industries for making decisions about the disposition of lots of finished products based on information generated from a life test. Such a methodology can be applied effectively to isolated lots as well as to a continuous stream of lots, using life tests to ensure control over the quality characteristics that relate to the functioning of the manufactured items over time. Sampling inspection plans for isolated lots are classified under lot-by-lot inspection procedures, whereas cumulative-results plans are classified under sampling inspection for continuous production, which yields a continuous stream of lots. This paper presents the notion of life tests for cumulative-results plans, with particular reference to chain sampling inspection plans, when the lots are formed from a continuous stream of production. The operating characteristic (OC) function of chain sampling plans for life tests is presented as a measure of performance when the lifetime random variable follows an exponential distribution. A procedure for designing the proposed plans, indexed by two points on the operating characteristic curve to provide protection to the producer and the consumer, is discussed with illustrations, and tables yielding the parameters of the optimum plans are provided.
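For context, the textbook ChSP-1 chain sampling OC function is Pa(p) = P0 + P1 * P0^i with P0 = (1-p)^n and P1 = n p (1-p)^(n-1); under an exponential life test truncated at time t0, the fraction nonconforming is p = 1 - exp(-t0/theta). The sketch below evaluates this classical form with illustrative values; the paper's life-test indexing and tables may differ.

# Classical ChSP-1 OC function driven by an exponential truncated life test.
import numpy as np

def chsp1_oc(p, n, i):
    p0 = (1.0 - p) ** n                   # probability of zero failures in a sample of n
    p1 = n * p * (1.0 - p) ** (n - 1)     # probability of exactly one failure
    return p0 + p1 * p0 ** i              # accept now, or accept if the i preceding samples were clean

theta, t0 = 2000.0, 100.0                 # mean life and truncation time (illustrative)
p = 1.0 - np.exp(-t0 / theta)             # P(item fails before t0) under exponential life
for n, i in [(5, 1), (5, 2), (10, 2)]:
    print(f"n={n}, i={i}: Pa={chsp1_oc(p, n, i):.4f}")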

 

 

S. Priyadharshini, G. Deepa

Critical Path in Terms of Intuitionistic Triangular Fuzzy Numbers Using Maximum Edge Distance Method

 

We live in a contemporary world in which managing projects successfully is a complex task for project managers and decision-makers. It is essential to pinpoint strategies that allow managers to accomplish projects and finish them within a predetermined period of time and resource constraints. This research helps detect the critical path in an acyclic network in terms of intuitionistic triangular fuzzy numbers using the proposed "maximum edge distance" method. Forward and backward algorithms are designed to find the optimal path for the proposed method, and numerical examples are illustrated. Verification is done using the path-length ranking technique, and simulation results are obtained with a C program and MATLAB. Finally, a comparison is made with the traditional forward and backward pass (existing) technique.
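A crisp-number sketch of the forward/backward passes used to locate a critical path in an acyclic activity network; the paper performs the same passes with intuitionistic triangular fuzzy durations and a maximum-edge-distance ranking, which are not reproduced here. The network and durations below are invented.

# Forward/backward pass on a small activity-on-arc network with crisp durations.
from collections import defaultdict

nodes = ["A", "B", "C", "D", "E"]                                   # topological order
edges = {("A", "B"): 3, ("A", "C"): 2, ("B", "D"): 4, ("C", "D"): 6, ("D", "E"): 2}

earliest = defaultdict(float)                                       # forward pass: earliest event times
for (u, v), w in sorted(edges.items(), key=lambda e: nodes.index(e[0][0])):
    earliest[v] = max(earliest[v], earliest[u] + w)

latest = {n: earliest["E"] for n in nodes}                          # backward pass: latest event times
for (u, v), w in sorted(edges.items(), key=lambda e: nodes.index(e[0][1]), reverse=True):
    latest[u] = min(latest[u], latest[v] - w)

critical = [(u, v) for (u, v), w in edges.items()
            if earliest[u] == latest[u] and earliest[v] == latest[v]
            and earliest[u] + w == earliest[v]]                     # zero-slack activities
print("project duration:", earliest["E"], "critical activities:", critical)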

 

Dhaval Bhoi, Amit Thakkar

Sentiment Analysis Performance and Reliability Evaluation Using an XLNet-based Deep Learning Approach

 

Online reviews are now a global form of communication between consumers and e-commerce companies. When making day-to-day decisions, customers rely heavily on the availability of internet reviews, as well as on their trustworthiness. Due to the unique qualities of user reviews, customers find it increasingly difficult to define and examine the authenticity and reliability of sentiment evaluations. Sentiment classification of user reviews can aid in understanding user feelings, review dependability, and customer perceptions of movie items. Deep learning is a strong technique for learning several layers of data representations or features, and deep learning techniques generally yield better results than traditional machine learning approaches. To assess, analyze and weight the usefulness of each review comment, we employ the XLNet deep learning model on a balanced movie review dataset. Experimental results demonstrate that the proposed deep learning model achieves higher performance than the other classifiers evaluated.
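A minimal inference sketch with a pretrained XLNet sequence classifier from the Hugging Face transformers library; the "xlnet-base-cased" checkpoint is the generic base model whose classification head is untrained, so this shows only the mechanics, not the authors' fine-tuned movie-review model.

# XLNet sentiment scoring skeleton (untrained head; outputs are not meaningful).
import torch
from transformers import XLNetTokenizer, XLNetForSequenceClassification

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)
model.eval()

reviews = ["A moving story with brilliant acting.", "Two hours I will never get back."]
inputs = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # shape (batch, 2)
probs = torch.softmax(logits, dim=-1)
print(probs)                                   # per-review class probabilities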

 

 

Jerin Paul, P. Yageen Thomas

Sharma-Mittal Entropy Properties on Generalized (k) Record Values

 

In this paper, we derive the Sharma-Mittal entropy of generalized (k) record values and analyse some of its important properties. We establish bounds for the Sharma-Mittal entropy of generalized (k) record values and derive a characterization result for the exponential distribution based on these properties. We further establish some distribution-free properties of the Sharma-Mittal divergence between the distribution of a generalized (k) record value and the parent distribution. We extend the concept of Sharma-Mittal entropy to the concomitants of generalized (k) record values arising from a Farlie-Gumbel-Morgenstern (FGM) bivariate distribution. We also consider the residual Sharma-Mittal entropy and use it to describe some properties of generalized (k) record values.
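A small numerical sketch of the Sharma-Mittal entropy of a continuous law, assuming the standard two-parameter form H_{a,b}(f) = (1/(1-b)) * [ (integral f(x)^a dx)^((1-b)/(1-a)) - 1 ], evaluated for an exponential parent; the record-value results of the paper are not reproduced here.

# Sharma-Mittal entropy by numerical integration, checked against the
# closed form for the exponential density (assumed standard definition).
import numpy as np
from scipy import integrate, stats

def sharma_mittal_entropy(pdf, a, b, support=(0.0, np.inf)):
    integral, _ = integrate.quad(lambda x: pdf(x) ** a, *support)
    return ((integral ** ((1.0 - b) / (1.0 - a))) - 1.0) / (1.0 - b)

lam, a, b = 2.0, 1.5, 0.8
pdf = stats.expon(scale=1.0 / lam).pdf
print(sharma_mittal_entropy(pdf, a, b))
# closed-form check: integral f^a dx = lam**(a-1)/a for the exponential density
print(((lam ** (a - 1) / a) ** ((1 - b) / (1 - a)) - 1) / (1 - b))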

 

 

Therrar Kadri, Souad Kadri, Seifedine Kadry, Khaled Smaili

The New Mixed Erlang Distribution: A Flexible Distribution for Modeling Lifetime Data

 

We introduce a new mixed Erlang distribution, generated from the convolution of extension exponential distributions, denoted the Mixed Erlang (ME) distribution. We derive an exact closed-form expression for the probability density function, which is then used to obtain closed-form expressions for the cumulative distribution function, reliability function, hazard function, moment generating function and k-th moment. The method of maximum likelihood and the method of moments are used for estimating the model parameters, and two applications to real data sets illustrate the potential of this distribution.

 

 

R. Vijayaraghavan, A. Pavithra

Construction of Life Test Sampling Inspection Plans by Attributes Based on Marshall-Olkin Extended Exponential Distribution

 

A life test is a random experiment conducted on manufactured items, such as electrical and electronic components, for estimating their lifetime based on the inspection of randomly sampled items. The lifetime of the items is a random variable that follows a specific continuous-type distribution, called the lifetime distribution. Reliability sampling, one of the classifications of product control, deals with inspection procedures for sentencing one or more lots or batches of items submitted for inspection. In this paper, the concept of sampling plans for life tests involving two samples is introduced under the assumption that the lifetime random variable is modeled by the Marshall-Olkin extended exponential distribution. A procedure is developed for designing the optimum plan with minimum sample sizes when two points on the desired operating characteristic curve are prescribed to ensure protection to the producer and the consumer.

 

 

Mohammad Ahmad, Ahteshamul Haq, Abdul Kalam, Sayed Kifayat Shah

A Comparative Study of Outlier Detection of Yamuna River, Delhi, India by Classical Statistics and Statistical Quality Control

 

Water quality control aids in preventing pollution, protecting public health, and preserving and improving the biological integrity of water bodies. Water quality involves many variables and observations, some of which fall outside the acceptable range. An outlier is an observation that lies apart from the rest of the data or diverges from the other observations in the sample in which it occurs. In this paper, we propose two methodologies for detecting outliers in Yamuna River water quality data with three variables, Chemical Oxygen Demand (COD), Biochemical Oxygen Demand (BOD) and pH, at three different locations, and we compare the two methodologies. They are based on descriptive statistics and Statistical Process Control (SPC), respectively, and a few outliers are present in the data. The outcome shows how far outlier detection methods have progressed, gives better knowledge of the various outlier methodologies, and provides a clear path for researchers toward future outlier detection methods.
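A side-by-side sketch of the two outlier screens compared above: the descriptive-statistics IQR rule and Shewhart-style 3-sigma control limits (using the sample standard deviation as a simplification of the moving-range estimate). The readings below are synthetic, not the Yamuna River measurements.

# IQR rule vs 3-sigma control limits on a synthetic COD series with two outliers.
import numpy as np

rng = np.random.default_rng(7)
cod = np.concatenate([rng.normal(30, 5, 60), [65.0, 4.0]])

# Descriptive statistics: Tukey's IQR rule
q1, q3 = np.percentile(cod, [25, 75])
iqr = q3 - q1
iqr_flags = (cod < q1 - 1.5 * iqr) | (cod > q3 + 1.5 * iqr)

# Statistical process control: 3-sigma limits around the mean
mean, sd = cod.mean(), cod.std(ddof=1)
spc_flags = (cod < mean - 3 * sd) | (cod > mean + 3 * sd)

print("IQR rule flags observations:", np.where(iqr_flags)[0])
print("3-sigma limits flag observations:", np.where(spc_flags)[0])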

 

 

