
Assignment on Metadata in Supply Chain


Metadata schemes are available and geared to various communities and various needs. A metadata scheme is also explored through the structuring of its metadata elements. From this we learned that metadata schemes promote consistency and uniformity of data so that it can be easily aggregated, moved, shared, and ultimately used as a resource discovery tool or for other purposes. They also distinguish between schemas designed to describe individual items and those designed to describe collections as a whole. We also learned about metadata formats: whereas HTML is used to format most Web pages, XML is intended to describe data. It is a hardware- and software-independent format that allows information to be structured, stored, and exchanged between what might otherwise be incompatible systems. HTML's tags are predetermined and largely limited to specifying format and display. By contrast, XML, a simplified subset of SGML, is "extensible" and flexible because its tags are unlimited, so anyone can invent and share a set of coded tags for a particular purpose, provided they follow the XML rules.
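To make the contrast concrete, here is a minimal sketch (not from the source) of an XML record that uses invented, purpose-specific tags for a supply chain item and reads it back with Python's standard library; the element names (item, sku, supplier, dateReceived) are assumptions for illustration only.

```python
import xml.etree.ElementTree as ET

# Invented, application-specific tags: XML lets anyone define them,
# as long as the document stays well-formed.
record = """
<item>
  <sku>SC-1042</sku>
  <title>Pallet of industrial fasteners</title>
  <supplier>Acme Components Ltd.</supplier>
  <dateReceived>2016-03-14</dateReceived>
</item>
"""

root = ET.fromstring(record)      # parse the custom-tagged record
for child in root:                # the structure, not the display, carries the meaning
    print(f"{child.tag}: {child.text}")
```

Because the tags describe the data rather than its display, any system that recognizes this agreed tag set can exchange such records, which is the interoperability point made above.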

Source 1: Bhosale, V. A., & Kant, R. (2016). Metadata analysis of knowledge management in the supply chain. Business Process Management Journal.

Abstract of Metadata in Supply Chain:

The purpose of this article is to provide a complete and valuable overview of information management (IM) in supply chain (SC) research. The research aimed to identify gaps and suggest future directions for metadata research. It is based on a literature review that analyzes IM metadata in SC research along various dimensions, both metadata-related dimensions and dimensions already reported in the literature. The researchers assess the current state of the area in order to predict the future role of supply chain management. The study inspects the status of IM in the supply chain in academic and engineering research over the past 12 years, drawing on a methodical and organized literature search of 170 peer-reviewed articles related to information management in the supply chain.

Author Credentials of Metadata in Supply Chain:

Vishal Ashok Bhosale and Ravi Kant are the authors of this research study. The authors specialize in supply chain management, and in this document they review and analyze the dimensions of metadata and supply chain management.

Intended Reader of Metadata in Supply Chain:

This document will assist information management and supply chain readers, university research scholars, and experts in the field. The research frames its contribution positively by identifying emerging opportunities, i.e., producing value, gaining a competitive advantage, and refining SC performance to achieve business goals.

What I learned of Metadata in Supply Chain:

This study provides an overall picture of the present research on information management in supply chain management. The findings rest on a methodical and complete analysis of 170 articles selected from 98 different journals over the period 2001-2014. The results scrutinize the articles according to the journals concerned, the number of articles per year, the countries in which the research was conducted, information about the authors involved, the research designs and methods used, and the main subjects discussed.

The review gives a clear picture of all the selected articles available in the related journals. Grounded in these results, several inferences strengthen the understanding of information management used in the supply chain as an independent technical field.

Source 2: Lamba, M., & Madhusudhan, M. (2019). Metadata Tagging and Prediction Modeling: Case Study of DESIDOC Journal of Library and Information Technology (2008–17). World Digital Libraries: An International Journal, 12(1), 33-89.

Abstract: This article describes the status and use of metadata tagging and predictive modeling tools for researchers and librarians. For the 2008-2017 period, 386 published articles were downloaded from the DESIDOC Journal of Library and Information Technology. The study was divided into two segments. In the first segment, the main themes of the research articles were identified using the Topic Modeling Toolkit, whereas in the second segment a predictive analysis using the RapidMiner toolbox was carried out to comment on future research articles based on the modeled topics.
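The study's first stage can be illustrated with a small, hedged sketch: the source used the Topic Modeling Toolkit and RapidMiner, whereas the sketch below uses scikit-learn and invented sample titles purely to show what "identifying main themes" from article text looks like.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented sample titles standing in for the 386 downloaded articles.
titles = [
    "Metadata standards for institutional repositories",
    "Open access publishing trends in digital libraries",
    "Information literacy instruction for undergraduate students",
    "Dublin Core application profiles for library catalogues",
]

vectorizer = CountVectorizer(stop_words="english").fit(titles)
word_counts = vectorizer.transform(titles)

# Fit a small topic model; the source's toolchain differs, but the idea is the same:
# each topic is a weighted list of terms that tend to co-occur across articles.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(word_counts)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-3:]]
    print(f"Topic {i}: {', '.join(top_terms)}")
```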

Author Credentials: Lamba is a Research Scholar in the Department of Library and Information Science, University of Delhi, Delhi. The co-author, Madhusudhan, works as an Associate Professor and former Deputy Dean (Academics) in the same department.

Intended Reader: The intended readers for this research are supply chain management organizations that use metadata.

What I learned: I learned about digital libraries, information literacy, other open access sources, and collection resources used in libraries over the period under review. In this study, the scientific articles were examined based on the modeled topics to offer users a better research experience. The source very effectively describes the current standing and use of metadata tagging and predictive modeling tools for researchers and library staff.

Topic 2: Digital Libraries and Metadata

Source 1: McQuilton, P., Gonzalez-Beltran, A., Rocca-Serra, P., Thurston, M., Lister, A., Maguire, E., & Sansone, S. A. (2016). BioSharing: curated and crowd-sourced metadata standards, databases and data policies in the life sciences. Database, 2016.

Abstract: BioSharing is a manually curated and searchable portal with three related registries. These online resources cover standards (terminologies, formats, reporting guidelines), databases, and data policies in the life sciences, which broadly encompass the biological, environmental, and biomedical sciences, all of which depend on metadata. BioSharing was launched in 2010 and is run by the same core team as its successful predecessor portal. It uses community curation to collect and describe life science resources from around the world. BioSharing makes these resources discoverable and accessible (the heart of the FAIR principles). Each record is designed to be linked to the others and contains a comprehensive description not merely of the resource itself but also of its relationships to other life science infrastructure. BioSharing serves a multitude of interest groups and maintains a growing community, to which it also offers many benefits: it helps funders and journal publishers navigate the landscape of life science metadata; it is an educational and training resource for librarians and data advisers; it is a publicity platform for the developers and maintainers of databases and standards; and it is an exploration tool for researchers and IT professionals planning their work. BioSharing is working with a growing number of publishers and other registries, for example to link metadata standards and databases to training materials and tools. In this article, drawing on collaboration with researchers, librarians, developers, and others, the authors describe BioSharing with a specific focus on community-level curation.

Author Credentials: The authors are McQuilton, P., Gonzalez-Beltran, A., Rocca-Serra, P., and colleagues, the team behind BioSharing, the portal of curated and crowd-sourced metadata standards, databases, and data policies in the life sciences.

Intended Reader: The intended readers for this study are journals, online data sources, and other registries that want to develop metadata standards and use online databases to train their staff on related materials and tools.

What I learned: BioSharing is a curated and searchable portal with related information on content standards, databases, and (evolving) data policies from journals and funders in the biosciences, which broadly encompass the biological, environmental, and biomedical disciplines.

I also learned about metadata standards and database maintainers: the research describes how they struggle to gain visibility for their resources in order to encourage adoption and support. At the same time, librarians, data stewards, funders, and journal editors frequently lack the resources to make an informed assessment of which database or standard to recommend to their user group. BioSharing consists of registries covering content standards, linked with databases and data policies in the life sciences. It aims to map the landscape of standards and databases developed by the community and to link them to and from the data policies of funders, journals, and other content publishers. Its goals are to promote harmonization and consistency and to reduce reinvention and the unnecessary proliferation of standards and databases. BioSharing is a central resource for implementing the FAIR principles supported by many organizations, which define the features that modern data resources, tools, and infrastructures should have: findable, accessible, interoperable, and reusable by third parties.

Source 2: Hakala, J. (2019). Metadata expert from Japan. Informaatiotutkimus, 38(1).

Abstract: According to this article, metadata has not been a popular area of research in Finland, although it is popular abroad. In Finland, one library school has analyzed the production and usage of metadata. The article is qualitative in nature and uses an interview as its tool. The result snippets provided by search engines such as Google are metadata; this is what users read first, since searching for information on the web usually starts with these services. Wikipedia articles and/or end-user-oriented websites that describe the target resources would be the second step, because search engines rank them highly. Institutional metadata, such as library catalogs and museum catalogs, would be the last hop for accessing materials stored in (virtual) libraries and museums. Users can find links to these resources; Wikipedia, for example, has links to various library services. However, these links are not yet common enough.

Author Credentials: The author is Juha Hakala, who works as a senior adviser at the National Library of Finland. He interviewed Shigeo Sugimoto. Shigeo Sugimoto became an associate professor at the University of Library and Information Science in 1986, just a year after graduating from engineering school. Since 2002, he has been a professor at the Faculty of Library, Information and Media Studies at the University of Tsukuba. He studied computer science but moved into library and information science early in his university career. This change was a bit of a culture shock, not least because most computer scientists in Japan are men, while the majority of students in library and information science were women. In addition to his successful academic career, Shigeo has been entrusted with numerous national and international positions of trust; some of them are listed at the end of the article, and these responsibilities have also shaped his research interests.

Intended Reader: The intended readers are content management organizations and libraries working with metadata in Finnish library schools.

What I learned: I learned from this article that metadata aggregation is a key technology for bridging the gap between end-users and providers such as memory institutions. End-user resources such as Wikipedia articles sit at one end, but the metadata provided by the storing institution is mainly based on manifestations or items; it would be necessary to close the gap between these ends. Another problem is linking metadata across time, e.g. connecting a memory institution's records from the 1950s to those of the 2000s.

Topic 3: Dublin Core

Source 1: Maron, D., & Feinberg, M. (2018). What does it mean to adopt a metadata standard? A case study of Omeka and the Dublin Core. Journal of Documentation.

Abstract: This article presents a case study of the Omeka content management system. Omeka describes itself as a "free, flexible, and open-source web publishing platform for libraries, museums, archives, and university collections and exhibitions". The article shows how the acceptance and application of a metadata standard (in this case, Dublin Core) can lead to contradictory rhetorical arguments concerning the use, quality, and consistency of metadata. In the case of Omeka, for example, the researchers illustrate a conceptual separation in how two groups of metadata participants - the creators of standards and standards users - operationalize metadata quality. For standards developers like the Dublin Core community, metadata quality implies the correct implementation of a standard according to its specified principles of use; for standards users like Omeka, on the other hand, metadata quality implies the simple adoption of the standard, regardless of proper use and its associated principles. The article takes an approach based on rhetorical criticism. Its purpose is to determine whether Omeka's stated goals (the position Omeka takes concerning Dublin Core) are consistent with Omeka's enacted goals (Omeka's actual argument concerning Dublin Core). To analyze this, the article examines both written evidence (what Omeka says in its documentation) and material evidence (what Omeka does).

Author Credentials: The first author is Deborah Maron, who holds a master's degree in Digital Culture & Technology and an MS in Library and Information Science. She is currently a Ph.D. scholar at the School of Information and Library Science, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA.

The second author, Melanie Feinberg, is also at the School of Information and Library Science (SILS), University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA. Both authors work at the same institute and conducted this research in 2018.

Intended Reader: The intended readers of this research are the creators of metadata standards and standards users. The researchers analyzed the actual situation through comparison and document analysis. The research is relevant to organizations that use the Dublin Core metadata standard.

What I learned: From this research I learned about the Dublin Core metadata standard, which the researchers discuss briefly around the article's two main objectives. The Dublin Core standard includes 15 elements (such as Title and Creator) that offer a basic description of almost any content resource. Dublin Core was designed to provide only the most essential elements of description, and it is frequently used as the sole description scheme in a digital library to facilitate the exchange and aggregation of basic descriptive records between systems. Dublin Core is the standard description scheme for Omeka items. The 15 basic elements of Dublin Core are formally recognized as ISO (2009) and NISO (2012) standards.
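As a small illustration of what such a record looks like in practice, the sketch below builds a record using a few of the 15 elements with Python's standard library; the sample values are invented, and only the element names and the Dublin Core namespace URI come from the standard.

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"   # namespace of the 15 simple DC elements
ET.register_namespace("dc", DC_NS)

# A few of the 15 Dublin Core elements describing an invented digital-library item.
fields = {
    "title":   "Annual supply chain performance report",
    "creator": "Example University Library",
    "date":    "2016",
    "format":  "application/pdf",
    "subject": "Supply chain management",
}

record = ET.Element("record")
for name, value in fields.items():
    element = ET.SubElement(record, f"{{{DC_NS}}}{name}")   # e.g. <dc:title>
    element.text = value

print(ET.tostring(record, encoding="unicode"))
```

Because every system that understands Dublin Core recognizes these same element names, records like this can be exchanged and aggregated between otherwise unrelated digital libraries, which is exactly the role the study describes for Omeka.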

One very interesting thing I learned is that the Dublin Core Metadata Initiative (DCMI) provides supplementary Dublin Core documentation and other related documents covering encoding syntaxes, a conceptual model, and usage guidelines. DCMI also sponsors a yearly conference for metadata researchers and practitioners, as well as a series of ongoing webinars.

This study helps in understanding the concept of metadata. Furthermore, it contributes to our thinking about how metadata standards are understood and used in practice. Some users do not give as much importance to interoperability and metadata aggregation as the Dublin Core community does. This indicates that while certain values concerning the adoption of standards are widespread in the metadata community, they are not shared by everybody involved in a digital library environment. The way standards developers (Dublin Core) understand what it means to adopt a standard differs from how standards users (Omeka) understand what it means to adopt that standard.

Although Omeka seems to argue that adopting the Dublin Core is an essential part of its mission, the platform's lack of support for correctly applying and operating the Dublin Core makes the opposite argument. In effect, Omeka contends that the gesture of adopting a standard is more significant than its careful execution.

Source 2: Mathieu, C. (2017). Practical application of the Dublin Core standard for enterprise metadata management. Bulletin of the Association for Information Science and Technology, 43(2), 29-34.

Abstract: JPL currently has no official metadata standard for internal content, although efforts have been made at regular intervals in the past to develop corporate terminologies or to describe basic metadata characteristics. Recently, members of the JPL library and many other stakeholders have made a new standardization effort. The goal is to create a standard schema that can be used to describe JPL's internal content and information, so that the library is linked with that content regardless of where it is located. The stakeholders in this effort comprised not only information professionals from the JPL library but also repository and application managers, whose written or managed content needs standardized metadata to label it sufficiently. In the initial stages of developing the JPL schema, existing content metadata from multiple sources was mapped to a simple, qualified, or custom standard element base to determine how many JPL-specific metadata properties are supported by the recognized Dublin Core. Many custom refinements to the standard schema are needed for it to remain valuable in business processes.
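The mapping exercise described above can be pictured with a rough, hedged sketch: the field names below are invented stand-ins for internal content sources (the actual JPL schema is not reproduced in the article), and each is renamed to a Dublin Core element.

```python
# Invented internal field names from two hypothetical content sources,
# each mapped onto a Dublin Core element name.
DOC_REPO_TO_DC = {
    "doc_title":   "title",
    "author_name": "creator",
    "released_on": "date",
    "mime_type":   "format",
}
WIKI_TO_DC = {
    "page_name":    "title",
    "last_editor":  "contributor",
    "last_updated": "date",
}

def to_dublin_core(record: dict, mapping: dict) -> dict:
    """Rename source-specific fields to their Dublin Core equivalents."""
    return {mapping[key]: value for key, value in record.items() if key in mapping}

print(to_dublin_core(
    {"doc_title": "Thermal test plan", "author_name": "J. Doe", "released_on": "2015-06-01"},
    DOC_REPO_TO_DC,
))
```

Fields with no Dublin Core equivalent are the "JPL-specific metadata properties" the abstract mentions; counting how many survive such a mapping shows how well the standard covers the local content.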

Author Credentials: The author is Camille Mathieu, who works at the Jet Propulsion Laboratory (JPL). JPL carried out a standardization effort and created an internal content schema founded on recognized metadata field values that are independent of any single information, content, or presentation system but locally adaptable. A variety of internal sources need to be linked with the Dublin Core.

Intended Reader: The intended readers are employees of content management organizations who want to establish metadata practices in their organization.

What I learned: I learned about the Dublin Core standard, as described in the international standard ISO 15836 and in additional detail on the Dublin Core Metadata Initiative (DCMI) website, which was chosen in this study, after a review of several metadata standards, as the basis for the JPL resource schema. The Dublin Core standard fits the constraints defined by JPL stakeholders because it is general enough to label a diversity of content, yet refined and adaptable enough to be used in specific presentation contexts. For the sake of precision, the metadata properties defined or endorsed by the Dublin Core standard can be divided into three groups:

▪ Simple Dublin Core consists of the 15 original elements first defined by the Dublin Core metadata workshops of the mid-1990s. Although this classification is conceptually useful, "Simple Dublin Core" is now a somewhat outdated term; these elements are currently summarized as basic properties in the dc/terms/ namespace.

▪ Qualified Dublin Core properties are refinements of the 15 original elements; the difference from the previous group is that these (together with some additions and elements more recently defined by DCMI) are currently maintained in the dc/terms/ namespace.

▪ Dublin Core custom properties are local refinements of dc/terms/ elements created by local schema developers. Although the Dublin Core standard does not allow entirely new custom elements, it does allow custom refinement of existing elements via the dumb-down (simplification) principle. This principle states that local refinements of Dublin Core building blocks are supported as long as outside applications can "ignore each qualifier and use the description as if it were unqualified". Compliance with this principle ensures that all company-specific metadata can be dropped by external systems and still understood, even if some specificity is lost, since all user-defined elements are sub-properties of the Dublin Core elements; the sketch after this list illustrates the idea.
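The sketch below is a minimal illustration of the dumb-down principle; the jpl:-prefixed property names are hypothetical refinements invented for this example, and only the idea of mapping each refinement back to its unqualified Dublin Core parent comes from the source.

```python
# A minimal sketch of the dumb-down principle (assumed property names, not the JPL schema):
# each refined property declares the Dublin Core element it specializes, so an external
# system that ignores the refinement can still read a plain Dublin Core record.
REFINEMENT_TO_PARENT = {
    "jpl:missionPhase": "dc:description",   # hypothetical custom refinement
    "jpl:testFacility": "dc:coverage",      # hypothetical custom refinement
    "dcterms:issued":   "dc:date",          # qualified refinement of the Date element
}

def dumb_down(record: dict) -> dict:
    """Replace refined properties with their unqualified Dublin Core parents."""
    plain = {}
    for prop, value in record.items():
        parent = REFINEMENT_TO_PARENT.get(prop, prop)  # unknown properties pass through
        plain.setdefault(parent, value)                # keep the first value per parent
    return plain

refined = {
    "dc:title":         "Thermal vacuum test report",
    "dcterms:issued":   "2016-09-01",
    "jpl:testFacility": "Building 144",
}
print(dumb_down(refined))
# {'dc:title': 'Thermal vacuum test report', 'dc:date': '2016-09-01', 'dc:coverage': 'Building 144'}
```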

References of Metadata in Supply Chain

Bhosale, V. A., & Kant, R. (2016). Metadata analysis of knowledge management in the supply chain. Business Process Management Journal.

Hakala, J. (2019). Metadata expert from Japan. Informaatiotutkimus, 38(1).

Lamba, M., & Madhusudhan, M. (2019). Metadata Tagging and Prediction Modeling: Case Study of DESIDOC Journal of Library and Information Technology (2008–17). World Digital Libraries: An International Journal, 12(1), 33-89.

Maron, D., & Feinberg, M. (2018). What does it mean to adopt a metadata standard? A case study of Omeka and the Dublin Core. Journal of Documentation.

Mathieu, C. (2017). Practical application of the Dublin Core standard for enterprise metadata management. Bulletin of the Association for Information Science and Technology, 43(2), 29-34.

McQuilton, P., Gonzalez-Beltran, A., Rocca-Serra, P., Thurston, M., Lister, A., Maguire, E., & Sansone, S. A. (2016). BioSharing: curated and crowd-sourced metadata standards, databases and data policies in the life sciences. Database, 2016.
