ci = ibootci(nboot,bootfun,...) computes the 95% iterated bootstrap confidence interval of the statistic computed by bootfun. nboot is a scalar, or a vector of up to two positive integers indicating the number of replicate samples for the first and second bootstraps. bootfun is a function handle specified with @, or a string indicating the function name. The third and later input arguments are data (column vectors) that are used to create inputs to bootfun. ibootci creates each bootstrap sample by sampling with replacement from the rows of the column-vector data arguments (which must all be the same size). Two-sided percentile confidence intervals [1] are then calibrated to achieve second-order accurate coverage [2,3]. The algorithm is described in detail in reference [3]. The iterated bootstrap is also commonly referred to as the double bootstrap.
ci = ibootci(nboot,{bootfun,...},'alpha',alpha) computes the 100*(1-alpha)% iterated bootstrap confidence interval of the statistic defined by the function bootfun. bootfun and the data that ibootci passes to it are contained in a single cell array. alpha is a scalar between 0 and 1. The default value of alpha is 0.05.

ci = ibootci(nboot,{bootfun,...},...,'mode',mode) performs bootstrap computations in the specified run-mode: 'fast' (vectorized) or 'slow' (loop). The default run-mode is 'fast', which is suitable for most applications with small-to-medium sample sizes. However, running large samples in fast mode may exhaust the available random-access memory (RAM). The problem can be remedied by running ibootci in 'slow' mode, which demands much less memory but is significantly slower.

[ci,bootstat] = ibootci(...) also returns the bootstrapped statistic computed for each of the nboot first bootstrap replicate samples. Each row of bootstat contains the results of applying bootfun to one replicate sample from the first bootstrap.

[ci,bootstat,alpha] = ibootci(...) also returns the adjusted nominal alpha level following calibration by the second bootstrap. If no bootstrap iteration is performed, the returned alpha level is the same as the user-specified (or default) alpha level.

Default values for the number of first and second bootstrap replicate sample sets are 2000 and 200 respectively. With these settings, calibrated coverage levels are rounded up to the nearest percent, which can in effect reduce coverage errors [3]. When the coverage level calibrates to 1, this function returns a warning that it has hit the ends of the bootstrap distribution. This signifies that the returned intervals could have inaccurate coverage. Typically, this situation occurs when the sample size is too small for the specified alpha level. This can vary according to the statistic calculated by bootfun and the number of replicate samples in the second bootstrap.
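The calibration step builds on the ordinary percentile interval. As a minimal illustration (in Python rather than MATLAB, with a hypothetical function name), here is the uncalibrated first-level percentile interval only, not ibootci's full double-bootstrap algorithm:

```python
import random
import statistics

def percentile_ci(data, statistic, nboot=2000, alpha=0.05, seed=0):
    """Uncalibrated first-level percentile bootstrap confidence interval."""
    rng = random.Random(seed)
    n = len(data)
    # Resample the data with replacement nboot times and collect the statistic
    boots = sorted(
        statistic([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(nboot)
    )
    lo = boots[int((alpha / 2) * nboot)]
    hi = boots[int((1 - alpha / 2) * nboot) - 1]
    return lo, hi

data = [4.1, 5.0, 3.8, 6.2, 5.5, 4.9, 5.1, 4.4, 5.8, 5.3]
ci = percentile_ci(data, statistics.mean)
```

The iterated bootstrap wraps this in a second level of resampling to estimate the actual coverage of the nominal interval and then adjusts alpha so that the calibrated interval attains the requested coverage.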

[1] Efron (1979) Bootstrap methods: another look at the jackknife.
 Annals of Statistics. 7(1): 1-26
[2] Lee and Young (1995) Asymptotic Iterated Bootstrap Confidence
 Intervals. Annals of Statistics. 23(4): 1301-1330.
[3] Lee and Young (1999) The effect of Monte Carlo approximation
 on coverage error of double-bootstrap confidence intervals.
 Journal of the Royal Statistical Society. Series B. 61(2).


If x is a vector, find the sample quartile of x. If x is a 2D matrix, perform the operation for each column and return them in a row vector. If the optional argument dim is given, operate along this dimension.
Hinges for the lower and upper quartiles are computed by an algorithm [1] that is a blend of Tukey's [2] and Moore and McCabe's [3] methods. This approach ensures that quartiles change gradually as the sample size of x increases.
If quartile equals 0, the minimum value is returned.
If quartile equals 1, the hinge for the first quartile is returned.
If quartile equals 2, the median value is returned.
If quartile equals 3, the hinge for the third quartile is returned.
If quartile equals 4, the maximum value is returned.
If quartile < 0 or if quartile > 4, an error is returned.
By default, the function handle func is @median and no smoothing of the quartiles is performed. If the sample size is relatively small, smoothing of the quartiles may be appropriate. If the handle input argument is set to @smoothmedian, then quartiles 1-3 are smoothed. Smoothing of the median is performed as described in [4]. For the first and third quartiles, the smoothed median is computed from the lower and upper halves of x respectively. Smoothing requires the smoothmedian function, which is also available on the MATLAB Central File Exchange.
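The exact blend of methods follows reference [1]; as a rough illustration of the Tukey-style hinge component only (a Python sketch with a hypothetical name, not this MATLAB function):

```python
def tukey_hinges(x):
    """Tukey hinges: medians of the lower and upper halves of the sorted
    data, including the overall median in both halves when n is odd."""
    s = sorted(x)
    n = len(s)
    half = (n + 1) // 2  # include the median in each half for odd n

    def med(v):
        m = len(v)
        return v[m // 2] if m % 2 else (v[m // 2 - 1] + v[m // 2]) / 2

    return med(s[:half]), med(s[-half:])

hinges = tukey_hinges([1, 2, 3, 4, 5, 6, 7])
```

For the seven values above, the lower and upper halves each include the median 4, giving hinges of 2.5 and 5.5.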

[1] https://en.wikipedia.org/wiki/Quartile
[2] Hoaglin, D.; Mosteller, F.; and Tukey, J. (Ed.). Understanding
     Robust and Exploratory Data Analysis. New York: Wiley, pp. 39,
     54, 62, 223, 1983.
[3] Moore, D. S. and McCabe, G. P. Introduction to the Practice of
     Statistics, 4th ed. New York: W. H. Freeman, 2002.
[4] Brown, Hall and Young (2001) The smoothed median and the
     bootstrap. Biometrika 88(2):519-534


### Description
MATLAB function to import data from [Kenneth French's Data Library](http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html), such as the Fama and French 3- and 5-factor series, and much more.
**NOTE**: a small subset of datasets is not supported due to their transposed format. Raise an [issue](https://github.com/okomarov/importFrenchData/issues), if not already reported, to request support for those files; support will be added depending on demand.
### Syntax

      Lists available datasets, their ZIPNAMEs and the description.
      Imports into a table the dataset specified by 'ZIPNAME'.

      Specifies the name and folder where the imported data is saved. By default
      the dataset is saved under the current directory as '.ZIPNAME.mat'.

Matrix Permanent using Fast Ryser Algorithm

Uses the Ryser formula with Gray coding to calculate the permanent of a matrix. The method is O(n·2^n), which is much faster than the naive O(n!·n) algorithm. In MATLAB R2008b this method runs about four times faster than the version without Gray coding. The permanent of a matrix is defined as the analog of the determinant in which all signs in the summation are taken as positive.
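As an illustration in Python (the submission itself is MATLAB; the function name here is hypothetical), the Gray-code ordering means each subset of columns differs from the previous one by a single column, so the running row sums are updated in O(n) per subset, giving the O(n·2^n) total cost:

```python
def permanent_ryser(A):
    """Permanent via Ryser's formula; Gray-code ordering lets each subset
    update the running row sums with a single column add/remove."""
    n = len(A)
    row_sums = [0.0] * n
    total = 0.0
    prev_gray = 0
    for k in range(1, 1 << n):
        gray = k ^ (k >> 1)                 # Gray code of k
        changed = gray ^ prev_gray          # exactly one bit differs
        j = changed.bit_length() - 1        # column toggled in/out
        sign = 1 if gray & changed else -1  # column added or removed
        for i in range(n):
            row_sums[i] += sign * A[i][j]
        prod = 1.0
        for r in row_sums:
            prod *= r
        total += (-1) ** bin(gray).count("1") * prod
        prev_gray = gray
    return (-1) ** n * total

p = permanent_ryser([[1, 2], [3, 4]])  # 1*4 + 2*3 = 10
```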


Model II regression should be used when the two variables in the regression equation are random and subject to error, i.e. not controlled by the researcher. Model I regression using ordinary least squares underestimates the slope of the linear relationship between the variables when they both contain error. According to Sokal and Rohlf (1995), the subject of Model II regression is one on which research and controversy are continuing and definitive recommendations are difficult to make.
MAREGRESS is a Model II procedure. A bivariate normal distribution can be represented by means of concentric ellipses centered at [mean(X), mean(Y)]. From analytical geometry, an ellipse is described by two principal axes (the major and minor axes), which are at right angles to each other. The major axis is the longest possible axis of the ellipse. To estimate the parameters we need the sample means, standard deviations and the covariance. Here, the main task is to find the slope and the equation of the major axis of the sample.
The equation of the major axis is defined as:
Y = mean(Y) + b1*(X - mean(X)) = mean(Y) - b1*mean(X) + b1*X = b0 + b1*X

As we can see, the equation involves the means of the two variables and
the slope of the major axis b1. The slope is calculated as,

                 b1 = SYX/[lambda1 - var(Y)]

where SYX is the covariance(Y,X), var(Y) is the variance of Y, and lambda1 the variability (variance) along the major axis [first eigenvalue, latent root or characteristic root of the variance-covariance matrix of Y and X].
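A minimal Python sketch of this slope calculation (the function name is hypothetical and this is not the MAREGRESS code; it assumes syx is nonzero so the denominator is well-defined). The largest eigenvalue of the 2x2 covariance matrix has a closed form:

```python
import math

def major_axis_fit(y, x):
    """Major axis (Model II) slope and intercept: b1 = SYX/(lambda1 - var(Y))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    vy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    syx = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    # Largest eigenvalue of the 2x2 variance-covariance matrix of Y and X
    lambda1 = ((vy + vx) + math.sqrt((vy - vx) ** 2 + 4 * syx ** 2)) / 2
    b1 = syx / (lambda1 - vy)   # slope of the major axis
    b0 = my - b1 * mx           # intercept
    return b0, b1

# Points exactly on y = 1 + 2x recover b0 = 1, b1 = 2
b0, b1 = major_axis_fit([1, 3, 5, 7], [0, 1, 2, 3])
```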

Pearson (1901) coined this term. Here we use the procedure given by Sokal and Rohlf (1995, Box 15.5).

The fundamentals come from multivariate statistics, relating to principal component analysis (PCA).

This procedure must satisfy:
-- a bivariate normal distribution
-- both variables must be in the same physical units or dimensionless
-- error variances of variables are assumed approximately equal (if there is no information on the ratio of the error variances and no reason to believe that it would differ from 1, results must be taken with caution)
-- can be used with dimensionally heterogeneous variables when the purpose of the analysis is to compare the slopes of the relationships between the two variables measured under different conditions

[B,BINTR] = MAREGRESS(Y,X,ALPHA) returns the vector B of major axis regression coefficients in the linear Model II and a matrix BINTR of the given confidence intervals for B.

MAREGRESS treats NaNs in Y or X as missing values, and removes them.

Syntax: function [b,bintr] = maregress(y,x,alpha)

y - dependent variable (must be the first entry)
x - independent variable (must be the second entry)
alpha - significance level

b - major axis regression parameters (intercept; slope)
bintr - parameter confidence intervals [lower upper (intercept); lower upper (slope)]


If x is a vector, find the univariate smoothed median of x. If x is a 2D matrix, compute the univariate smoothed median value for each column and return them in a row vector. If the optional argument dim is given, operate along this dimension.
Smoothing the median is achieved by minimizing the following objective function [1]:
S(p) = sum over all pairs i < j of sqrt((x(i) - p)^2 + (x(j) - p)^2)
With the ordinary median as the initial value of p, this function minimizes the objective function by finding the root of the first derivative using a Newton-Bisection hybrid algorithm. By default, the tolerance (Tol) for the first derivative is set at single machine precision.

This smoothing results in a small drop in the breakdown point of the median from 0.5 to a value of 0.341 and has good properties for bootstrap applications. Bootstrap confidence intervals using the smoothed median have good coverage for the ordinary median of the population distribution and can be used to obtain second-order accurate intervals with Studentized bootstrap and calibrated percentile bootstrap methods [1]. Unlike kernel-based smoothing approaches, bootstrapping smoothmedian does not require explicit choice of a smoothing parameter or a probability density function. The algorithm used is suitable for small-to-medium sample sizes.
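A rough Python sketch of the minimization (using plain bisection on the monotone first derivative, rather than the Newton-Bisection hybrid the MATLAB function uses; the function name is hypothetical):

```python
import math

def smoothed_median(x, tol=1e-9):
    """Minimize S(p) = sum over i<j of sqrt((x[i]-p)^2 + (x[j]-p)^2)
    by bisection on the first derivative; S is convex in p, so the
    derivative is monotone and has a single root."""
    def dS(p):
        total = 0.0
        n = len(x)
        for i in range(n):
            for j in range(i + 1, n):
                denom = math.hypot(x[i] - p, x[j] - p)
                if denom > 0:  # skip degenerate pairs where both equal p
                    total += ((p - x[i]) + (p - x[j])) / denom
        return total

    lo, hi = min(x), max(x)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if dS(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

m = smoothed_median([1.0, 2.0, 3.0])  # symmetric data: stays at 2
```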

[1] Brown, Hall and Young (2001) The smoothed median and the bootstrap. Biometrika 88(2):519-534


The following files are included:
p_spoly_dist.m - Computes the distances from a set of np points p(1), p(2), ..., p(np) on a unit sphere to a spherical polyline. The spherical polyline is defined as a set of nv-1 great circle arcs connecting nv ordered vertices v(1), v(2), ..., v(nv) on a spherical surface.
In the case where a point's projection lies OUTSIDE all polyline segments, the distance to the closest vertex of the polyline is returned.
This function is similar to p_poly_dist function (http://www.mathworks.com/matlabcentral/fileexchange/19398-distance-from-a-point-to-polygon/content/p_poly_dist.m), but instead of Euclidean 2D plane, works on a spherical surface.
test_p_spoly_dist.m - a simple unit test for p_spoly_dist. Plots the results of a call to p_spoly_dist function in 3D
run_s_tests - a script that calls test_p_spoly_dist unit test for different input points and vertices.
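The core primitive behind such a function is the cross-track (angular) distance from a point to a great circle. A hedged Python sketch of that primitive only (p_spoly_dist itself additionally clamps to arc segments and falls back to vertex distances; names here are hypothetical):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def unit(a):
    m = math.sqrt(dot(a, a))
    return tuple(ai / m for ai in a)

def dist_to_great_circle(p, v1, v2):
    """Angular distance (radians) from unit vector p to the great circle
    through unit vectors v1 and v2 (cross-track distance)."""
    pole = unit(cross(v1, v2))  # normal to the plane of the great circle
    return abs(math.asin(max(-1.0, min(1.0, dot(pole, p)))))

# North pole to the equator through (1,0,0) and (0,1,0): pi/2 radians
d = dist_to_great_circle((0, 0, 1), (1, 0, 0), (0, 1, 0))
```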

Semi-Supervised Normalized Cuts for Image Segmentation

Performs semi-supervised image segmentation using the algorithm described in:
S. E. Chew and N. D. Cahill, "Semi-Supervised Normalized Cuts for Image Segmentation," Proc. International Conference on Computer Vision (ICCV), 2015.

Also contains implementations of other image segmentation approaches based on the Normalized Cuts algorithm and its generalizations, including the algorithms described in the following papers:

J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, Aug 2000.

S. X. Yu and J. Shi. Segmentation given partial grouping constraints. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2):173–183, Feb 2004.

A. Eriksson, C. Olsson, and F. Kahl. Normalized cuts revisited: A reformulation for segmentation with linear grouping constraints. Journal of Mathematical Imaging and Vision, 39(1):45–61, 2011.

S. Maji, N. K. Vishnoi, and J. Malik. Biased normalized cuts. Proc. Computer Vision and Pattern Recognition (CVPR), 2057–2064, 2011.

All algorithms can be applied to an example image by running exampleScript.m.
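At the heart of all of these methods is a relaxed eigenproblem on the normalized graph Laplacian. As a hedged, minimal Python sketch (not the authors' code) of an unconstrained two-way normalized-cut partition on a tiny weight matrix, using power iteration with deflation in place of a full eigensolver:

```python
import math

def fiedler_partition(W, iters=200):
    """Two-way partition from the second eigenvector of the normalized
    adjacency M = D^(-1/2) W D^(-1/2), via power iteration on (I + M)/2
    after deflating the trivial top eigenvector D^(1/2)*1."""
    n = len(W)
    d = [sum(row) for row in W]              # degrees
    s = [di ** 0.5 for di in d]              # D^(1/2) applied to ones
    norm_s = sum(si * si for si in s) ** 0.5
    top = [si / norm_s for si in s]          # unit trivial eigenvector of M
    v = [math.sin(i + 1) for i in range(n)]  # arbitrary starting vector
    for _ in range(iters):
        # deflate: remove the component along the trivial eigenvector
        c = sum(vi * ti for vi, ti in zip(v, top))
        v = [vi - c * ti for vi, ti in zip(v, top)]
        # apply (I + M)/2: shifts eigenvalues into [0, 1] so the second
        # eigenvector dominates the deflated iteration
        Mv = [sum(W[i][j] * v[j] / (s[i] * s[j]) for j in range(n))
              for i in range(n)]
        v = [(vi + mi) / 2 for vi, mi in zip(v, Mv)]
        scale = max(abs(vi) for vi in v) or 1.0
        v = [vi / scale for vi in v]
    return [vi >= 0 for vi in v]

# Two obvious clusters: strong weights within {0,1} and {2,3}, weak across
W = [[0, 1, 0.05, 0.05],
     [1, 0, 0.05, 0.05],
     [0.05, 0.05, 0, 1],
     [0.05, 0.05, 1, 0]]
parts = fiedler_partition(W)
```

The constrained variants in the papers above add grouping or labeling constraints to this relaxation rather than changing the basic spectral machinery.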

Axis XX

This package will allow you to plot data on multiple X or Y axis. Similar to MATLAB's own plotyy function, but less limiting.
Create any combination of X or Y axes, plot multiple lines on each X or Y axis
Use any plotting function you specify (plot, line, patch, etc)
Compatible with MATLAB's zoom, pan and data cursor tools
Figure is resizeable and rescales objects around colorbars if they are present
In a massive break from the structure of the old AddAxis 6, the code has been completely restructured into classes, which are not compatible with the old function-based code. The legacy AddAxis6 code is included in case you need it.
Originally inspired by AddAxis 5 by Harry Lee
Hint: If you need to change the figure resize callback function make sure you include a call to addaxisresizefcn in your callback function.
Check the example.m file for some quick tips.
If you find any bugs or have suggestions please write in the comments box or message me. Enjoy :)


Basic code for histogram equalization, for an image processing course.
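A minimal sketch of the standard technique (the submission itself is MATLAB; this Python illustration, with a hypothetical name, maps each gray level through the normalized cumulative histogram):

```python
def equalize(img, levels=256):
    """Histogram equalization: build the histogram, accumulate it into a
    CDF, and remap each gray level through the normalized CDF."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)  # first nonzero CDF value
    denom = max(n - cdf_min, 1)              # guard against constant images
    lut = [round((c - cdf_min) / denom * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in img]

out = equalize([[0, 64], [128, 192]])  # four distinct levels spread to 0..255
```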
